Test Report: Docker_Linux_crio 22141

2191194101c4a9ddc7fa6949616ce2e0ec39dec5:2025-12-16:42801

Failed tests (30/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.54
44 TestAddons/parallel/Registry 14.15
45 TestAddons/parallel/RegistryCreds 0.4
46 TestAddons/parallel/Ingress 145.12
47 TestAddons/parallel/InspektorGadget 5.24
48 TestAddons/parallel/MetricsServer 5.32
50 TestAddons/parallel/CSI 40.17
51 TestAddons/parallel/Headlamp 2.47
52 TestAddons/parallel/CloudSpanner 5.24
53 TestAddons/parallel/LocalPath 8.08
54 TestAddons/parallel/NvidiaDevicePlugin 6.24
55 TestAddons/parallel/Yakd 5.24
56 TestAddons/parallel/AmdGpuDevicePlugin 5.26
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 2.36
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 2.29
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 2.29
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 2.3
294 TestJSONOutput/pause/Command 2.39
300 TestJSONOutput/unpause/Command 2.24
366 TestPause/serial/Pause 5.86
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.06
412 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.16
414 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.39
422 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.81
426 TestStartStop/group/old-k8s-version/serial/Pause 5.56
435 TestStartStop/group/no-preload/serial/Pause 6.11
437 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.17
443 TestStartStop/group/embed-certs/serial/Pause 6.34
449 TestStartStop/group/newest-cni/serial/Pause 6.32
454 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.68
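
Each failure is detailed below. To re-run a single entry locally, the test name from the table can be passed to `go test` — a minimal sketch assuming the standard minikube integration-test layout (the package path, timeout, and any required start flags are assumptions, not taken from this report):

	# hypothetical local re-run of the first failed test in the table:
	go test ./test/integration -run "TestAddons/serial/Volcano" -v -timeout 30m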
TestAddons/serial/Volcano (0.54s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable volcano --alsologtostderr -v=1: exit status 11 (541.336576ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:27:27.848049   18221 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:27.848368   18221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:27.848376   18221 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:27.848381   18221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:27.848898   18221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:27.849382   18221 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:27.850342   18221 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:27.850382   18221 addons.go:622] checking whether the cluster is paused
	I1216 04:27:27.850495   18221 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:27.850509   18221 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:27.850870   18221 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:27.869770   18221 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:27.869819   18221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:27.888784   18221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:27.980736   18221 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:27.980826   18221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:28.010517   18221 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:28.010542   18221 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:28.010548   18221 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:28.010560   18221 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:28.010565   18221 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:28.010569   18221 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:28.010574   18221 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:28.010579   18221 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:28.010587   18221 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:28.010604   18221 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:28.010612   18221 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:28.010617   18221 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:28.010625   18221 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:28.010629   18221 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:28.010636   18221 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:28.010647   18221 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:28.010658   18221 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:28.010664   18221 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:28.010668   18221 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:28.010672   18221 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:28.010680   18221 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:28.010685   18221 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:28.010690   18221 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:28.010694   18221 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:28.010698   18221 cri.go:89] found id: ""
	I1216 04:27:28.010746   18221 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:28.093969   18221 out.go:203] 
	W1216 04:27:28.169772   18221 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:28.169800   18221 out.go:285] * 
	* 
	W1216 04:27:28.172862   18221 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:28.244183   18221 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.54s)
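
Note: every "addons disable" failure in this run (Registry, RegistryCreds, and the other MK_ADDON_DISABLE_PAUSED exits below) shows this same signature: before disabling an addon, minikube checks whether the cluster is paused by shelling into the node and running `sudo runc list -f json`, and that check exits 1 because /run/runc does not exist on this crio node. A minimal reproduction sketch, assuming the kicbase container is still up and accepts docker exec (the test itself reaches the node over SSH on 127.0.0.1:32768):

	# The crictl listing succeeds, as the "found id:" lines above show:
	docker exec addons-920118 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The runc listing is the step that fails and aborts the disable:
	docker exec addons-920118 sudo runc list -f json
	# => level=error msg="open /run/runc: no such file or directory"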

TestAddons/parallel/Registry (14.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 2.893858ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002250333s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003631308s
addons_test.go:394: (dbg) Run:  kubectl --context addons-920118 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-920118 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-920118 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.695504869s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 ip
2025/12/16 04:27:51 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable registry --alsologtostderr -v=1: exit status 11 (236.055512ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:27:51.136628   21091 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:51.136923   21091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:51.136934   21091 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:51.136938   21091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:51.137185   21091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:51.137487   21091 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:51.137840   21091 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:51.137862   21091 addons.go:622] checking whether the cluster is paused
	I1216 04:27:51.137962   21091 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:51.137980   21091 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:51.138391   21091 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:51.157492   21091 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:51.157560   21091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:51.175841   21091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:51.266383   21091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:51.266464   21091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:51.294009   21091 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:51.294046   21091 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:51.294053   21091 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:51.294058   21091 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:51.294061   21091 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:51.294070   21091 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:51.294075   21091 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:51.294080   21091 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:51.294084   21091 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:51.294108   21091 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:51.294117   21091 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:51.294120   21091 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:51.294123   21091 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:51.294126   21091 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:51.294129   21091 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:51.294133   21091 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:51.294139   21091 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:51.294144   21091 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:51.294147   21091 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:51.294149   21091 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:51.294152   21091 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:51.294158   21091 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:51.294161   21091 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:51.294163   21091 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:51.294168   21091 cri.go:89] found id: ""
	I1216 04:27:51.294220   21091 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:51.307715   21091 out.go:203] 
	W1216 04:27:51.308945   21091 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:51.308965   21091 out.go:285] * 
	* 
	W1216 04:27:51.311923   21091 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:51.313197   21091 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.15s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.874871ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-920118
addons_test.go:334: (dbg) Run:  kubectl --context addons-920118 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (236.855365ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:27:47.943884   20611 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:47.944063   20611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:47.944073   20611 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:47.944078   20611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:47.944299   20611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:47.944556   20611 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:47.944862   20611 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:47.944879   20611 addons.go:622] checking whether the cluster is paused
	I1216 04:27:47.944958   20611 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:47.944968   20611 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:47.945415   20611 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:47.963680   20611 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:47.963732   20611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:47.981807   20611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:48.072793   20611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:48.072875   20611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:48.101087   20611 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:48.101109   20611 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:48.101113   20611 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:48.101116   20611 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:48.101119   20611 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:48.101122   20611 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:48.101125   20611 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:48.101128   20611 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:48.101131   20611 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:48.101138   20611 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:48.101142   20611 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:48.101148   20611 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:48.101152   20611 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:48.101157   20611 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:48.101162   20611 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:48.101178   20611 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:48.101189   20611 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:48.101196   20611 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:48.101200   20611 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:48.101205   20611 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:48.101212   20611 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:48.101215   20611 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:48.101217   20611 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:48.101220   20611 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:48.101223   20611 cri.go:89] found id: ""
	I1216 04:27:48.101265   20611 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:48.115269   20611 out.go:203] 
	W1216 04:27:48.116458   20611 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:48.116476   20611 out.go:285] * 
	* 
	W1216 04:27:48.119629   20611 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:48.121130   20611 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (145.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-920118 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-920118 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-920118 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [75077392-36dd-466a-ae5f-b36b97aaafaa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [75077392-36dd-466a-ae5f-b36b97aaafaa] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003553418s
I1216 04:27:50.871872    8592 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.619136148s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
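Note: the "ssh: Process exited with status 28" above is curl's exit code 28 (CURLE_OPERATION_TIMEDOUT): the HTTP request timed out rather than being refused, so the ingress controller never answered on port 80. A hedged manual re-check with an explicit timeout and verbose output (the extra curl flags are standard options, not part of the test):

	out/minikube-linux-amd64 -p addons-920118 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"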
addons_test.go:290: (dbg) Run:  kubectl --context addons-920118 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-920118
helpers_test.go:244: (dbg) docker inspect addons-920118:

-- stdout --
	[
	    {
	        "Id": "b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02",
	        "Created": "2025-12-16T04:25:48.17220649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 10997,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:25:48.206794545Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02/hostname",
	        "HostsPath": "/var/lib/docker/containers/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02/hosts",
	        "LogPath": "/var/lib/docker/containers/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02-json.log",
	        "Name": "/addons-920118",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-920118:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-920118",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02",
	                "LowerDir": "/var/lib/docker/overlay2/16783fba5330964cabd30c2581f65c1cbd9c022ace92e3c17286fcca62d5dc9c-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16783fba5330964cabd30c2581f65c1cbd9c022ace92e3c17286fcca62d5dc9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16783fba5330964cabd30c2581f65c1cbd9c022ace92e3c17286fcca62d5dc9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16783fba5330964cabd30c2581f65c1cbd9c022ace92e3c17286fcca62d5dc9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-920118",
	                "Source": "/var/lib/docker/volumes/addons-920118/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-920118",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-920118",
	                "name.minikube.sigs.k8s.io": "addons-920118",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8dce173bf4f1fcd01cd24763bbac8c8884decb5b2b08191a27e99254536ac19e",
	            "SandboxKey": "/var/run/docker/netns/8dce173bf4f1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-920118": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39d8d0633a2fff8310f5532936b9a77fc9a6621291b9e67d3291508ccb49b4f4",
	                    "EndpointID": "464bb78d2c6befd6118a2610a48962d5812608a15b1195285613f508c650a521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "e2:3a:0d:19:df:4b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-920118",
	                        "b481032a3d51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-920118 -n addons-920118
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-920118 logs -n 25: (1.098389081s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-231674 --alsologtostderr --binary-mirror http://127.0.0.1:36059 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-231674 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ -p binary-mirror-231674                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-231674 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ addons  │ enable dashboard -p addons-920118                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-920118                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ start   │ -p addons-920118 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:27 UTC │
	│ addons  │ addons-920118 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ addons-920118 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-920118 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ addons-920118 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ addons-920118 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ addons-920118 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ addons-920118 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-920118                                                                                                                                                                                                                                                                                                                                                                                           │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │ 16 Dec 25 04:27 UTC │
	│ addons  │ addons-920118 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ ssh     │ addons-920118 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ ip      │ addons-920118 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │ 16 Dec 25 04:27 UTC │
	│ addons  │ addons-920118 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ ssh     │ addons-920118 ssh cat /opt/local-path-provisioner/pvc-483bbdf6-b041-40f8-a75d-b907e63bd548_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │ 16 Dec 25 04:27 UTC │
	│ addons  │ addons-920118 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ addons-920118 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ addons-920118 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │                     │
	│ addons  │ addons-920118 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │                     │
	│ addons  │ addons-920118 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │                     │
	│ addons  │ addons-920118 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │                     │
	│ ip      │ addons-920118 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-920118        │ jenkins │ v1.37.0 │ 16 Dec 25 04:30 UTC │ 16 Dec 25 04:30 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:25:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
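	The header above documents the glog-style record layout used by every log line that follows. As a minimal sketch of parsing that layout in Go (the regular expression and field names here are my own, inferred only from the format string above, not taken from minikube):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// glogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:\]]+:\d+)\] (.*)$`)
	
	func main() {
		line := "I1216 04:25:24.306358   10343 out.go:360] Setting OutFile to fd 1 ..."
		if m := glogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
	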
	I1216 04:25:24.306358   10343 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:25:24.306576   10343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:24.306584   10343 out.go:374] Setting ErrFile to fd 2...
	I1216 04:25:24.306588   10343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:24.306752   10343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:25:24.307276   10343 out.go:368] Setting JSON to false
	I1216 04:25:24.308046   10343 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":475,"bootTime":1765858649,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:25:24.308114   10343 start.go:143] virtualization: kvm guest
	I1216 04:25:24.309884   10343 out.go:179] * [addons-920118] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:25:24.311115   10343 notify.go:221] Checking for updates...
	I1216 04:25:24.311143   10343 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:25:24.312224   10343 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:25:24.313258   10343 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:25:24.314375   10343 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:25:24.315408   10343 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:25:24.316518   10343 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:25:24.317700   10343 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:25:24.339458   10343 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:25:24.339605   10343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:24.392768   10343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 04:25:24.384035903 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:24.392877   10343 docker.go:319] overlay module found
	I1216 04:25:24.394500   10343 out.go:179] * Using the docker driver based on user configuration
	I1216 04:25:24.395565   10343 start.go:309] selected driver: docker
	I1216 04:25:24.395579   10343 start.go:927] validating driver "docker" against <nil>
	I1216 04:25:24.395588   10343 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:25:24.396245   10343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:24.452141   10343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 04:25:24.443254876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:24.452284   10343 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:25:24.452483   10343 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:25:24.454014   10343 out.go:179] * Using Docker driver with root privileges
	I1216 04:25:24.455179   10343 cni.go:84] Creating CNI manager for ""
	I1216 04:25:24.455240   10343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:25:24.455251   10343 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:25:24.455314   10343 start.go:353] cluster config:
	{Name:addons-920118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:25:24.456557   10343 out.go:179] * Starting "addons-920118" primary control-plane node in "addons-920118" cluster
	I1216 04:25:24.457592   10343 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:25:24.458534   10343 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:25:24.459376   10343 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:24.459402   10343 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:24.459407   10343 cache.go:65] Caching tarball of preloaded images
	I1216 04:25:24.459456   10343 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:25:24.459482   10343 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 04:25:24.459490   10343 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 04:25:24.459790   10343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/config.json ...
	I1216 04:25:24.459815   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/config.json: {Name:mke945c6726d35da1068061ecb7d185de0f97ad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
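	The profile config saved above is plain JSON, carrying the same fields that appear in the "cluster config" dump a few lines earlier. As a minimal sketch, assuming only the field names visible in that dump (this is not minikube's own loader), it can be read back with encoding/json:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// clusterConfig models a small subset of a profile's config.json,
	// using field names taken from the cluster config dump above.
	type clusterConfig struct {
		Name             string
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ContainerRuntime  string
		}
	}
	
	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: readprofile <config.json>")
			os.Exit(2)
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var cfg clusterConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s: Kubernetes %s on %s/%s\n", cfg.Name,
			cfg.KubernetesConfig.KubernetesVersion, cfg.Driver,
			cfg.KubernetesConfig.ContainerRuntime)
	}
	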
	I1216 04:25:24.475378   10343 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 04:25:24.475518   10343 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 04:25:24.475544   10343 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1216 04:25:24.475552   10343 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1216 04:25:24.475559   10343 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 04:25:24.475565   10343 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1216 04:25:37.148427   10343 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1216 04:25:37.148477   10343 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:25:37.148544   10343 start.go:360] acquireMachinesLock for addons-920118: {Name:mkb2ed6fe27dda6ced07935a09984425c540259b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:25:37.148658   10343 start.go:364] duration metric: took 90.869µs to acquireMachinesLock for "addons-920118"
	I1216 04:25:37.148688   10343 start.go:93] Provisioning new machine with config: &{Name:addons-920118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:25:37.148773   10343 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:25:37.150520   10343 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 04:25:37.150790   10343 start.go:159] libmachine.API.Create for "addons-920118" (driver="docker")
	I1216 04:25:37.150822   10343 client.go:173] LocalClient.Create starting
	I1216 04:25:37.150949   10343 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 04:25:37.195198   10343 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 04:25:37.310407   10343 cli_runner.go:164] Run: docker network inspect addons-920118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:25:37.328600   10343 cli_runner.go:211] docker network inspect addons-920118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:25:37.328674   10343 network_create.go:284] running [docker network inspect addons-920118] to gather additional debugging logs...
	I1216 04:25:37.328699   10343 cli_runner.go:164] Run: docker network inspect addons-920118
	W1216 04:25:37.345527   10343 cli_runner.go:211] docker network inspect addons-920118 returned with exit code 1
	I1216 04:25:37.345554   10343 network_create.go:287] error running [docker network inspect addons-920118]: docker network inspect addons-920118: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-920118 not found
	I1216 04:25:37.345566   10343 network_create.go:289] output of [docker network inspect addons-920118]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-920118 not found
	
	** /stderr **
	I1216 04:25:37.345695   10343 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:25:37.361890   10343 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dad130}
	I1216 04:25:37.361925   10343 network_create.go:124] attempt to create docker network addons-920118 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 04:25:37.361987   10343 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-920118 addons-920118
	I1216 04:25:37.408654   10343 network_create.go:108] docker network addons-920118 192.168.49.0/24 created
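	The network-creation step above reduces to a single docker CLI invocation with the private subnet picked a few lines earlier. As a minimal sketch, the same command can be issued from Go via os/exec (flags copied verbatim from the log line; only the error handling is mine):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Reproduces the "docker network create" call logged above.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=addons-920118",
			"addons-920118")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("network create failed:", err)
		}
	}
	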
	I1216 04:25:37.408683   10343 kic.go:121] calculated static IP "192.168.49.2" for the "addons-920118" container
	I1216 04:25:37.408744   10343 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:25:37.426169   10343 cli_runner.go:164] Run: docker volume create addons-920118 --label name.minikube.sigs.k8s.io=addons-920118 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:25:37.444124   10343 oci.go:103] Successfully created a docker volume addons-920118
	I1216 04:25:37.444225   10343 cli_runner.go:164] Run: docker run --rm --name addons-920118-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-920118 --entrypoint /usr/bin/test -v addons-920118:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 04:25:44.280842   10343 cli_runner.go:217] Completed: docker run --rm --name addons-920118-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-920118 --entrypoint /usr/bin/test -v addons-920118:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (6.836576098s)
	I1216 04:25:44.280873   10343 oci.go:107] Successfully prepared a docker volume addons-920118
	I1216 04:25:44.280960   10343 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:44.280976   10343 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:25:44.281054   10343 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-920118:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:25:48.096502   10343 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-920118:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.815406369s)
	I1216 04:25:48.096536   10343 kic.go:203] duration metric: took 3.815558003s to extract preloaded images to volume ...
	W1216 04:25:48.096637   10343 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 04:25:48.096671   10343 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 04:25:48.096709   10343 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:25:48.157180   10343 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-920118 --name addons-920118 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-920118 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-920118 --network addons-920118 --ip 192.168.49.2 --volume addons-920118:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 04:25:48.452062   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Running}}
	I1216 04:25:48.471658   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:25:48.490094   10343 cli_runner.go:164] Run: docker exec addons-920118 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:25:48.542353   10343 oci.go:144] the created container "addons-920118" has a running status.
	I1216 04:25:48.542389   10343 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa...
	I1216 04:25:48.627941   10343 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:25:48.652189   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:25:48.670781   10343 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:25:48.670806   10343 kic_runner.go:114] Args: [docker exec --privileged addons-920118 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:25:48.716085   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:25:48.737416   10343 machine.go:94] provisionDockerMachine start ...
	I1216 04:25:48.737528   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:48.762882   10343 main.go:143] libmachine: Using SSH client type: native
	I1216 04:25:48.763237   10343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 04:25:48.763264   10343 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:25:48.764805   10343 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34766->127.0.0.1:32768: read: connection reset by peer
	I1216 04:25:51.890870   10343 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-920118
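	Note the sequence here: the first SSH dial at 04:25:48 is reset while the container's sshd is still coming up, and the same command succeeds on a later attempt at 04:25:51. A minimal sketch of that wait-for-port pattern, which only probes the mapped TCP port and is not libmachine's actual retry logic:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "127.0.0.1:32768" // host port mapped to the node's sshd, per the log
		for attempt := 1; attempt <= 10; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Printf("reachable after %d attempt(s)\n", attempt)
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(time.Second)
		}
		fmt.Println("gave up")
	}
	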
	
	I1216 04:25:51.890894   10343 ubuntu.go:182] provisioning hostname "addons-920118"
	I1216 04:25:51.890989   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:51.908485   10343 main.go:143] libmachine: Using SSH client type: native
	I1216 04:25:51.908747   10343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 04:25:51.908761   10343 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-920118 && echo "addons-920118" | sudo tee /etc/hostname
	I1216 04:25:52.042142   10343 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-920118
	
	I1216 04:25:52.042218   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.060680   10343 main.go:143] libmachine: Using SSH client type: native
	I1216 04:25:52.060888   10343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 04:25:52.060903   10343 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-920118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-920118/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-920118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:25:52.185763   10343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:25:52.185789   10343 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 04:25:52.185812   10343 ubuntu.go:190] setting up certificates
	I1216 04:25:52.185827   10343 provision.go:84] configureAuth start
	I1216 04:25:52.185896   10343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-920118
	I1216 04:25:52.203118   10343 provision.go:143] copyHostCerts
	I1216 04:25:52.203190   10343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 04:25:52.203314   10343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 04:25:52.203376   10343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 04:25:52.203426   10343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.addons-920118 san=[127.0.0.1 192.168.49.2 addons-920118 localhost minikube]
	I1216 04:25:52.280302   10343 provision.go:177] copyRemoteCerts
	I1216 04:25:52.280353   10343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:25:52.280386   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.297383   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:52.387974   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:25:52.406952   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 04:25:52.423669   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:25:52.439998   10343 provision.go:87] duration metric: took 254.147423ms to configureAuth
	I1216 04:25:52.440038   10343 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:25:52.440203   10343 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:25:52.440318   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.457699   10343 main.go:143] libmachine: Using SSH client type: native
	I1216 04:25:52.457964   10343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 04:25:52.457982   10343 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:25:52.720646   10343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:25:52.720669   10343 machine.go:97] duration metric: took 3.983227082s to provisionDockerMachine
	I1216 04:25:52.720680   10343 client.go:176] duration metric: took 15.569851959s to LocalClient.Create
	I1216 04:25:52.720699   10343 start.go:167] duration metric: took 15.569910426s to libmachine.API.Create "addons-920118"
	I1216 04:25:52.720709   10343 start.go:293] postStartSetup for "addons-920118" (driver="docker")
	I1216 04:25:52.720722   10343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:25:52.720776   10343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:25:52.720821   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.738407   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:52.831914   10343 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:25:52.835309   10343 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:25:52.835335   10343 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:25:52.835345   10343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 04:25:52.835395   10343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 04:25:52.835418   10343 start.go:296] duration metric: took 114.703208ms for postStartSetup
	I1216 04:25:52.835690   10343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-920118
	I1216 04:25:52.852813   10343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/config.json ...
	I1216 04:25:52.853106   10343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:25:52.853145   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.870197   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:52.957892   10343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:25:52.962170   10343 start.go:128] duration metric: took 15.813382554s to createHost
	I1216 04:25:52.962215   10343 start.go:83] releasing machines lock for "addons-920118", held for 15.813541626s
	I1216 04:25:52.962285   10343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-920118
	I1216 04:25:52.979699   10343 ssh_runner.go:195] Run: cat /version.json
	I1216 04:25:52.979746   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.979799   10343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:25:52.979866   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:53.000266   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:53.000267   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:53.142592   10343 ssh_runner.go:195] Run: systemctl --version
	I1216 04:25:53.148701   10343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:25:53.183626   10343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:25:53.188170   10343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:25:53.188242   10343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:25:53.213072   10343 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 04:25:53.213092   10343 start.go:496] detecting cgroup driver to use...
	I1216 04:25:53.213120   10343 detect.go:190] detected "systemd" cgroup driver on host os
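	The driver choice logged above ("systemd" on this host) follows from the host's cgroup setup. One common detection heuristic, shown as a sketch under the assumption that a unified cgroup v2 mount implies a systemd-managed hierarchy (this is not minikube's actual detect.go):
	
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// cgroup v2 exposes cgroup.controllers at the root of the unified mount.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy): systemd driver is the usual match")
		} else {
			fmt.Println("cgroup v1: pick the driver that matches the runtime's config")
		}
	}
	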
	I1216 04:25:53.213171   10343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:25:53.228428   10343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:25:53.240521   10343 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:25:53.240583   10343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:25:53.255818   10343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:25:53.273135   10343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:25:53.356263   10343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:25:53.439475   10343 docker.go:234] disabling docker service ...
	I1216 04:25:53.439535   10343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:25:53.457365   10343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:25:53.469667   10343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:25:53.550013   10343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:25:53.628535   10343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:25:53.640364   10343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:25:53.653242   10343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:25:53.653293   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.663404   10343 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 04:25:53.663457   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.672117   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.680079   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.688819   10343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:25:53.696770   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.705612   10343 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.718875   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
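	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with at least the following keys. This reconstruction assumes the base file already carried pause_image and cgroup_manager entries for the substitutions to hit:
	
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	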
	I1216 04:25:53.727101   10343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:25:53.733941   10343 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 04:25:53.733982   10343 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 04:25:53.745968   10343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
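	The two steps above, loading br_netfilter after the failed sysctl probe and then enabling ip_forward, are the usual kernel prerequisites for bridged pod networking. A minimal sketch that re-reads both settings straight from /proc as a quick check (my own helper, not minikube code):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// readSysctl returns the trimmed contents of a /proc/sys entry.
	func readSysctl(path string) string {
		b, err := os.ReadFile(path)
		if err != nil {
			return "unavailable: " + err.Error()
		}
		return strings.TrimSpace(string(b))
	}
	
	func main() {
		fmt.Println("bridge-nf-call-iptables:", readSysctl("/proc/sys/net/bridge/bridge-nf-call-iptables"))
		fmt.Println("ip_forward:", readSysctl("/proc/sys/net/ipv4/ip_forward"))
	}
	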
	I1216 04:25:53.754888   10343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:25:53.834062   10343 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:25:53.969642   10343 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:25:53.969726   10343 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:25:53.973558   10343 start.go:564] Will wait 60s for crictl version
	I1216 04:25:53.973618   10343 ssh_runner.go:195] Run: which crictl
	I1216 04:25:53.977126   10343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:25:54.001480   10343 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:25:54.001610   10343 ssh_runner.go:195] Run: crio --version
	I1216 04:25:54.029484   10343 ssh_runner.go:195] Run: crio --version
	I1216 04:25:54.059887   10343 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 04:25:54.060989   10343 cli_runner.go:164] Run: docker network inspect addons-920118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:25:54.078626   10343 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:25:54.082630   10343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:25:54.092334   10343 kubeadm.go:884] updating cluster {Name:addons-920118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:25:54.092442   10343 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:54.092495   10343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:25:54.123722   10343 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:25:54.123743   10343 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:25:54.123785   10343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:25:54.148910   10343 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:25:54.148931   10343 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:25:54.148939   10343 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 04:25:54.149047   10343 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-920118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:25:54.149115   10343 ssh_runner.go:195] Run: crio config
	I1216 04:25:54.192561   10343 cni.go:84] Creating CNI manager for ""
	I1216 04:25:54.192591   10343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:25:54.192606   10343 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:25:54.192628   10343 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-920118 NodeName:addons-920118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:25:54.192748   10343 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-920118"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
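	The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch that splits the stream and reports each document's kind, using gopkg.in/yaml.v3; the file path is the one the log writes a few lines below:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Each document carries its own apiVersion/kind header.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var hdr struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &hdr); err != nil {
				fmt.Fprintln(os.Stderr, "unparsable document:", err)
				continue
			}
			fmt.Printf("%s (%s)\n", hdr.Kind, hdr.APIVersion)
		}
	}
	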
	
	I1216 04:25:54.192807   10343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 04:25:54.200501   10343 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:25:54.200569   10343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:25:54.207908   10343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 04:25:54.220096   10343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 04:25:54.234558   10343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 04:25:54.246381   10343 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:25:54.249615   10343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:25:54.259147   10343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:25:54.331825   10343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:25:54.357421   10343 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118 for IP: 192.168.49.2
	I1216 04:25:54.357447   10343 certs.go:195] generating shared ca certs ...
	I1216 04:25:54.357465   10343 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.357601   10343 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 04:25:54.540247   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt ...
	I1216 04:25:54.540276   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt: {Name:mk29534da1360337bfbee56ec9908cf197b6a751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.540456   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key ...
	I1216 04:25:54.540466   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key: {Name:mk6b443e82720727e2eac180e3464d924488c446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.540536   10343 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 04:25:54.597807   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt ...
	I1216 04:25:54.597835   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt: {Name:mk7e765cfabf8a69146f9a1605dc7c1b7ad36407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.597998   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key ...
	I1216 04:25:54.598011   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key: {Name:mkcc6a4851601e8ab1dfec327bd3214ef09d13d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.598093   10343 certs.go:257] generating profile certs ...
	I1216 04:25:54.598145   10343 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.key
	I1216 04:25:54.598158   10343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt with IP's: []
	I1216 04:25:54.736851   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt ...
	I1216 04:25:54.736884   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: {Name:mk13e163a196d53ed0dd1104bdd79cad8883d4ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.737075   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.key ...
	I1216 04:25:54.737088   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.key: {Name:mk4910ee3534a83b82f4bac6b5f9a0531c23ca30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.737163   10343 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key.842ef08a
	I1216 04:25:54.737182   10343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt.842ef08a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 04:25:54.845968   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt.842ef08a ...
	I1216 04:25:54.846002   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt.842ef08a: {Name:mkb3cd644d39010340a045d4cda0c63bb87485c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.846199   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key.842ef08a ...
	I1216 04:25:54.846214   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key.842ef08a: {Name:mkb9634b38fd8e921118fb77a1d7725c1708ee92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.846299   10343 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt.842ef08a -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt
	I1216 04:25:54.846378   10343 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key.842ef08a -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key
	I1216 04:25:54.846483   10343 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.key
	I1216 04:25:54.846508   10343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.crt with IP's: []
	I1216 04:25:54.914645   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.crt ...
	I1216 04:25:54.914678   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.crt: {Name:mk2d5a1b6fdc389f6f1e6b1f234405f60337d44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.914860   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.key ...
	I1216 04:25:54.914873   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.key: {Name:mk12b693a31445ee37043a8da8505d9712041382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.915083   10343 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:25:54.915126   10343 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:25:54.915156   10343 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:25:54.915182   10343 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 04:25:54.915828   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:25:54.934462   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:25:54.951197   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:25:54.967624   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 04:25:54.984244   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 04:25:55.000439   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:25:55.017028   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:25:55.034097   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:25:55.050419   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:25:55.069154   10343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:25:55.081142   10343 ssh_runner.go:195] Run: openssl version
	I1216 04:25:55.086858   10343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:25:55.093779   10343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:25:55.103730   10343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:25:55.107590   10343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:25:55.107638   10343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:25:55.141566   10343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:25:55.149314   10343 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
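
The openssl x509 -hash call above computes the certificate's subject hash (b5213941 for this minikubeCA), and the <hash>.0 symlink under /etc/ssl/certs is the convention OpenSSL-based clients use to locate a CA during verification. The two steps can be reproduced by hand (a sketch using the paths from the log):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0
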
	I1216 04:25:55.156729   10343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:25:55.160184   10343 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:25:55.160233   10343 kubeadm.go:401] StartCluster: {Name:addons-920118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:25:55.160335   10343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:25:55.160379   10343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:25:55.186913   10343 cri.go:89] found id: ""
	I1216 04:25:55.186992   10343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:25:55.195043   10343 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:25:55.203106   10343 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:25:55.203163   10343 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:25:55.210271   10343 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:25:55.210291   10343 kubeadm.go:158] found existing configuration files:
	
	I1216 04:25:55.210333   10343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:25:55.217417   10343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:25:55.217485   10343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:25:55.224646   10343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:25:55.231696   10343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:25:55.231765   10343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:25:55.238545   10343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:25:55.245603   10343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:25:55.245661   10343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:25:55.253005   10343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:25:55.260403   10343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:25:55.260448   10343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
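
Each of the four grep-then-rm pairs above performs the same stale-config check: if a kubeconfig does not reference https://control-plane.minikube.internal:8443, it is removed so kubeadm can regenerate it. A compact equivalent of what the log does (a sketch):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
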
	I1216 04:25:55.268111   10343 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:25:55.324561   10343 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 04:25:55.380198   10343 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:26:04.213886   10343 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 04:26:04.213938   10343 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:26:04.214066   10343 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:26:04.214167   10343 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 04:26:04.214234   10343 kubeadm.go:319] OS: Linux
	I1216 04:26:04.214312   10343 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:26:04.214380   10343 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:26:04.214422   10343 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:26:04.214461   10343 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:26:04.214504   10343 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:26:04.214542   10343 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:26:04.214587   10343 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:26:04.214629   10343 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 04:26:04.214690   10343 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:26:04.214781   10343 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:26:04.214920   10343 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:26:04.215039   10343 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:26:04.217766   10343 out.go:252]   - Generating certificates and keys ...
	I1216 04:26:04.217858   10343 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:26:04.217938   10343 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:26:04.218076   10343 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:26:04.218176   10343 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:26:04.218249   10343 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:26:04.218290   10343 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:26:04.218340   10343 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:26:04.218472   10343 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-920118 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:26:04.218545   10343 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:26:04.218685   10343 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-920118 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:26:04.218794   10343 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:26:04.218892   10343 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:26:04.218956   10343 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:26:04.219062   10343 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:26:04.219128   10343 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:26:04.219208   10343 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:26:04.219253   10343 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:26:04.219349   10343 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:26:04.219409   10343 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:26:04.219474   10343 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:26:04.219527   10343 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:26:04.220769   10343 out.go:252]   - Booting up control plane ...
	I1216 04:26:04.220871   10343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:26:04.220953   10343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:26:04.221047   10343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:26:04.221148   10343 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:26:04.221258   10343 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:26:04.221389   10343 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:26:04.221478   10343 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:26:04.221535   10343 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:26:04.221717   10343 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:26:04.221865   10343 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:26:04.221942   10343 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00146928s
	I1216 04:26:04.222087   10343 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 04:26:04.222215   10343 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1216 04:26:04.222326   10343 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 04:26:04.222451   10343 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 04:26:04.222557   10343 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.241910337s
	I1216 04:26:04.222642   10343 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.935449434s
	I1216 04:26:04.222724   10343 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501982481s
	I1216 04:26:04.222882   10343 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 04:26:04.223009   10343 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 04:26:04.223076   10343 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 04:26:04.223280   10343 kubeadm.go:319] [mark-control-plane] Marking the node addons-920118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 04:26:04.223368   10343 kubeadm.go:319] [bootstrap-token] Using token: ts7uag.97mgvhceg9rw9a8w
	I1216 04:26:04.224900   10343 out.go:252]   - Configuring RBAC rules ...
	I1216 04:26:04.225049   10343 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 04:26:04.225140   10343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 04:26:04.225302   10343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 04:26:04.225461   10343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 04:26:04.225562   10343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 04:26:04.225630   10343 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 04:26:04.225768   10343 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 04:26:04.225821   10343 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 04:26:04.225889   10343 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 04:26:04.225897   10343 kubeadm.go:319] 
	I1216 04:26:04.225947   10343 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 04:26:04.225953   10343 kubeadm.go:319] 
	I1216 04:26:04.226036   10343 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 04:26:04.226044   10343 kubeadm.go:319] 
	I1216 04:26:04.226065   10343 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 04:26:04.226123   10343 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 04:26:04.226200   10343 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 04:26:04.226209   10343 kubeadm.go:319] 
	I1216 04:26:04.226292   10343 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 04:26:04.226302   10343 kubeadm.go:319] 
	I1216 04:26:04.226374   10343 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 04:26:04.226383   10343 kubeadm.go:319] 
	I1216 04:26:04.226446   10343 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 04:26:04.226525   10343 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 04:26:04.226628   10343 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 04:26:04.226635   10343 kubeadm.go:319] 
	I1216 04:26:04.226735   10343 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 04:26:04.226836   10343 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 04:26:04.226845   10343 kubeadm.go:319] 
	I1216 04:26:04.226918   10343 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ts7uag.97mgvhceg9rw9a8w \
	I1216 04:26:04.227052   10343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 04:26:04.227085   10343 kubeadm.go:319] 	--control-plane 
	I1216 04:26:04.227092   10343 kubeadm.go:319] 
	I1216 04:26:04.227216   10343 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 04:26:04.227227   10343 kubeadm.go:319] 
	I1216 04:26:04.227336   10343 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ts7uag.97mgvhceg9rw9a8w \
	I1216 04:26:04.227491   10343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
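
The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA for joining nodes; it is the SHA-256 of the CA's DER-encoded public key. It can be recomputed from the CA this cluster uses (/var/lib/minikube/certs, per the certificateDir line earlier) with the standard pipeline from the kubeadm docs (a sketch; assumes an RSA CA key):

    # prepend "sha256:" to the output to get the flag value
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
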
	I1216 04:26:04.227511   10343 cni.go:84] Creating CNI manager for ""
	I1216 04:26:04.227520   10343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:26:04.228974   10343 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 04:26:04.230086   10343 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 04:26:04.234344   10343 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 04:26:04.234359   10343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 04:26:04.247361   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 04:26:04.443014   10343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 04:26:04.443287   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-920118 minikube.k8s.io/updated_at=2025_12_16T04_26_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=addons-920118 minikube.k8s.io/primary=true
	I1216 04:26:04.443287   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:04.454403   10343 ops.go:34] apiserver oom_adj: -16
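
The oom_adj of -16 read above is the legacy OOM-killer interface; modern kernels expose the same protection as oom_score_adj, which the kubelet sets strongly negative for critical static pods like the API server. Both views can be checked (a sketch; assumes pgrep matches a single kube-apiserver process):

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale: -17..15
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern scale: -1000..1000
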
	I1216 04:26:04.528477   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:05.028714   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:05.529176   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:06.028718   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:06.528807   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:07.028557   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:07.529504   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:08.029377   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:08.528523   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:09.029464   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:09.094694   10343 kubeadm.go:1114] duration metric: took 4.651462414s to wait for elevateKubeSystemPrivileges
	I1216 04:26:09.094734   10343 kubeadm.go:403] duration metric: took 13.934502954s to StartCluster
	I1216 04:26:09.094754   10343 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:09.094876   10343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:26:09.095328   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:09.095517   10343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 04:26:09.095529   10343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:26:09.095614   10343 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 04:26:09.095720   10343 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:26:09.095744   10343 addons.go:70] Setting yakd=true in profile "addons-920118"
	I1216 04:26:09.095766   10343 addons.go:239] Setting addon yakd=true in "addons-920118"
	I1216 04:26:09.095772   10343 addons.go:70] Setting gcp-auth=true in profile "addons-920118"
	I1216 04:26:09.095781   10343 addons.go:70] Setting metrics-server=true in profile "addons-920118"
	I1216 04:26:09.095766   10343 addons.go:70] Setting default-storageclass=true in profile "addons-920118"
	I1216 04:26:09.095797   10343 addons.go:239] Setting addon metrics-server=true in "addons-920118"
	I1216 04:26:09.095800   10343 mustload.go:66] Loading cluster: addons-920118
	I1216 04:26:09.095806   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.095815   10343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-920118"
	I1216 04:26:09.095826   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.095908   10343 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-920118"
	I1216 04:26:09.095945   10343 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-920118"
	I1216 04:26:09.095976   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.096014   10343 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:26:09.096133   10343 addons.go:70] Setting cloud-spanner=true in profile "addons-920118"
	I1216 04:26:09.096205   10343 addons.go:239] Setting addon cloud-spanner=true in "addons-920118"
	I1216 04:26:09.096240   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096251   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.096291   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096388   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096397   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096457   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096593   10343 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-920118"
	I1216 04:26:09.096639   10343 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-920118"
	I1216 04:26:09.096671   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.096673   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.097704   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.095774   10343 addons.go:70] Setting registry-creds=true in profile "addons-920118"
	I1216 04:26:09.098308   10343 addons.go:239] Setting addon registry-creds=true in "addons-920118"
	I1216 04:26:09.098352   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.098742   10343 out.go:179] * Verifying Kubernetes components...
	I1216 04:26:09.098784   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.097761   10343 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-920118"
	I1216 04:26:09.099127   10343 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-920118"
	I1216 04:26:09.099156   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.103182   10343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:26:09.095763   10343 addons.go:70] Setting inspektor-gadget=true in profile "addons-920118"
	I1216 04:26:09.103452   10343 addons.go:239] Setting addon inspektor-gadget=true in "addons-920118"
	I1216 04:26:09.103511   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.097780   10343 addons.go:70] Setting storage-provisioner=true in profile "addons-920118"
	I1216 04:26:09.103647   10343 addons.go:239] Setting addon storage-provisioner=true in "addons-920118"
	I1216 04:26:09.103794   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.097790   10343 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-920118"
	I1216 04:26:09.104123   10343 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-920118"
	I1216 04:26:09.097804   10343 addons.go:70] Setting volcano=true in profile "addons-920118"
	I1216 04:26:09.104443   10343 addons.go:239] Setting addon volcano=true in "addons-920118"
	I1216 04:26:09.104491   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.104545   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.097812   10343 addons.go:70] Setting volumesnapshots=true in profile "addons-920118"
	I1216 04:26:09.104721   10343 addons.go:239] Setting addon volumesnapshots=true in "addons-920118"
	I1216 04:26:09.104749   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.105037   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.105196   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.105232   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.097822   10343 addons.go:70] Setting registry=true in profile "addons-920118"
	I1216 04:26:09.097852   10343 addons.go:70] Setting ingress-dns=true in profile "addons-920118"
	I1216 04:26:09.104051   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.105587   10343 addons.go:239] Setting addon registry=true in "addons-920118"
	I1216 04:26:09.105618   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.105905   10343 addons.go:239] Setting addon ingress-dns=true in "addons-920118"
	I1216 04:26:09.106004   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.097846   10343 addons.go:70] Setting ingress=true in profile "addons-920118"
	I1216 04:26:09.106065   10343 addons.go:239] Setting addon ingress=true in "addons-920118"
	I1216 04:26:09.106098   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.106444   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.106569   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.106962   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.118450   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.146871   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 04:26:09.148309   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 04:26:09.150645   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 04:26:09.151917   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 04:26:09.153251   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 04:26:09.154432   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 04:26:09.156794   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 04:26:09.159056   10343 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 04:26:09.160461   10343 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:26:09.160480   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 04:26:09.160551   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.161133   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 04:26:09.163570   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 04:26:09.165612   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 04:26:09.166864   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.167374   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.170713   10343 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 04:26:09.171489   10343 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 04:26:09.171515   10343 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 04:26:09.172629   10343 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 04:26:09.172643   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 04:26:09.172701   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.173165   10343 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 04:26:09.173179   10343 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 04:26:09.173235   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.173301   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 04:26:09.173308   10343 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 04:26:09.173342   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	W1216 04:26:09.193411   10343 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 04:26:09.194489   10343 addons.go:239] Setting addon default-storageclass=true in "addons-920118"
	I1216 04:26:09.195348   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.196151   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.196218   10343 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 04:26:09.197156   10343 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 04:26:09.198056   10343 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:26:09.198078   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 04:26:09.198144   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.203389   10343 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:26:09.203410   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 04:26:09.203458   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.203563   10343 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 04:26:09.203916   10343 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-920118"
	I1216 04:26:09.204133   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.205657   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.205981   10343 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:26:09.206049   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 04:26:09.206113   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.216063   10343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:26:09.217576   10343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:26:09.218669   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:26:09.218748   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.231512   10343 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 04:26:09.232604   10343 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:26:09.234111   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 04:26:09.234835   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.237779   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
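
The Port:32768 in these ssh client lines is the host port Docker mapped to the container's port 22, which is exactly what the docker container inspect Go template above extracts. The lookup can be reproduced directly (a sketch):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-920118
    # -> 32768
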
	I1216 04:26:09.240980   10343 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:26:09.244063   10343 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 04:26:09.245970   10343 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:26:09.247400   10343 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:26:09.248319   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 04:26:09.248430   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.248200   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 04:26:09.249674   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 04:26:09.250251   10343 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 04:26:09.250342   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.254875   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.258680   10343 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 04:26:09.260227   10343 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 04:26:09.261995   10343 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 04:26:09.262100   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 04:26:09.262173   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.269060   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.280180   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.280302   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.281051   10343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
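
The sed pipeline above patches the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the gateway 192.168.49.1 (with fallthrough so other names still resolve) plus a log directive, then replaces the ConfigMap. The injected fragment can be inspected afterwards (a sketch):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected to contain:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
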
	I1216 04:26:09.282756   10343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:26:09.282773   10343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:26:09.282826   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.283918   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.286372   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.303490   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.306112   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.308930   10343 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 04:26:09.310469   10343 out.go:179]   - Using image docker.io/busybox:stable
	I1216 04:26:09.311124   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.311754   10343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:26:09.311773   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 04:26:09.311831   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.314132   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.317213   10343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1216 04:26:09.325169   10343 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:26:09.325205   10343 retry.go:31] will retry after 206.507609ms: ssh: handshake failed: EOF
	I1216 04:26:09.325694   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.334557   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.343270   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.359213   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	W1216 04:26:09.361904   10343 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:26:09.361975   10343 retry.go:31] will retry after 133.463129ms: ssh: handshake failed: EOF
	I1216 04:26:09.432946   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:26:09.433274   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:26:09.444407   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 04:26:09.444439   10343 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 04:26:09.445750   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 04:26:09.451457   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 04:26:09.451480   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 04:26:09.467823   10343 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 04:26:09.467857   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 04:26:09.470261   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 04:26:09.470285   10343 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 04:26:09.489162   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 04:26:09.489236   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 04:26:09.493394   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:26:09.493394   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:26:09.494942   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:26:09.507967   10343 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 04:26:09.508013   10343 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 04:26:09.509471   10343 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 04:26:09.509498   10343 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 04:26:09.511477   10343 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 04:26:09.511503   10343 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 04:26:09.511800   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:26:09.520537   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:26:09.521923   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 04:26:09.521942   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 04:26:09.522766   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 04:26:09.522780   10343 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 04:26:09.550656   10343 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 04:26:09.550682   10343 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 04:26:09.560861   10343 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:26:09.560887   10343 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 04:26:09.561919   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:26:09.561942   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 04:26:09.563281   10343 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:26:09.563297   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 04:26:09.565091   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 04:26:09.565105   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 04:26:09.605712   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:26:09.610139   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 04:26:09.610172   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 04:26:09.617936   10343 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 04:26:09.617961   10343 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 04:26:09.653345   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:26:09.659622   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:26:09.660410   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 04:26:09.660473   10343 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 04:26:09.686914   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 04:26:09.686936   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 04:26:09.711606   10343 node_ready.go:35] waiting up to 6m0s for node "addons-920118" to be "Ready" ...
	I1216 04:26:09.711858   10343 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
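
The start.go line above records that minikube wrote a host record mapping host.minikube.internal to the host-side gateway (192.168.49.1 on the default docker network) into the CoreDNS ConfigMap, so pods can reach the host by name. A minimal client-go check for that record — a sketch, assuming the kubeadm-default ConfigMap name "coredns" in kube-system, with the kubeconfig path as seen inside the node in the log (from the host you would point at your own kubeconfig instead):

    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path as used inside the node in the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// kubeadm ships the CoreDNS config in the "coredns" ConfigMap in kube-system.
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for key, val := range cm.Data {
    		if strings.Contains(val, "host.minikube.internal") {
    			fmt.Printf("injected host record found under data key %q\n", key)
    		}
    	}
    }
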
	I1216 04:26:09.733427   10343 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:26:09.733446   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 04:26:09.760180   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 04:26:09.760209   10343 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 04:26:09.763355   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:26:09.768944   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:26:09.785946   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:26:09.808497   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 04:26:09.808530   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 04:26:09.884279   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 04:26:09.884309   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 04:26:09.952732   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:26:09.952778   10343 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 04:26:10.002649   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:26:10.230227   10343 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-920118" context rescaled to 1 replicas
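
The kapi.go:214 line above scales the coredns deployment down from the kubeadm default of two replicas to one, which is enough on a single-node cluster. A sketch of the same rescale through the scale subresource (not minikube's exact code):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	// Read the current scale of kube-system/coredns, then write it back with
    	// one replica via the scale subresource, without patching the whole Deployment.
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1
    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }
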
	I1216 04:26:10.443325   10343 addons.go:495] Verifying addon registry=true in "addons-920118"
	I1216 04:26:10.443426   10343 addons.go:495] Verifying addon metrics-server=true in "addons-920118"
	I1216 04:26:10.445007   10343 out.go:179] * Verifying registry addon...
	I1216 04:26:10.453444   10343 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 04:26:10.457034   10343 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:26:10.457058   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
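
The kapi.go lines above (and the long run of them that follows) poll the pod behind each addon until it leaves Pending; the trailing `[<nil>]` is just the reason field printed alongside the phase. A rough client-go equivalent of that wait loop — a sketch, using the namespace and label selector from the log, with an illustrative 6-minute timeout rather than minikube's actual one:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls pods matching selector in ns until all of them
    // report phase Running, mirroring the kapi.go loop in the log.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // transient error or nothing scheduled yet: keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil // still Pending, as in the log lines
    				}
    			}
    			return true, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	err = waitForPodsRunning(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry")
    	fmt.Println("registry pods running:", err == nil)
    }
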
	I1216 04:26:10.475269   10343 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-920118 service yakd-dashboard -n yakd-dashboard
	
	I1216 04:26:10.746233   10343 addons.go:495] Verifying addon ingress=true in "addons-920118"
	I1216 04:26:10.747769   10343 out.go:179] * Verifying ingress addon...
	I1216 04:26:10.750207   10343 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 04:26:10.752850   10343 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 04:26:10.752869   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:10.956932   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:11.113511   10343 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.327516819s)
	W1216 04:26:11.113550   10343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 04:26:11.113581   10343 retry.go:31] will retry after 196.988358ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
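
The failure above is a CRD-establishment race: the batch applies the VolumeSnapshot CRDs and a VolumeSnapshotClass instance in a single kubectl invocation, and the API server has not finished establishing the new kind when the instance arrives — hence "ensure CRDs are installed first". minikube handles it by retrying (retry.go:31 above; the `apply --force` re-run at 04:26:11.310 succeeds about 2.5s later). The race can also be avoided up front by waiting for the CRD's Established condition before applying instances, e.g. `kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io`, or in Go — a sketch, not minikube's code, with the CRD name taken from the error above:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForCRDEstablished blocks until the named CRD reports Established=True,
    // after which instances of its kind can be applied safely.
    func waitForCRDEstablished(ctx context.Context, cs apiextensionsclient.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 200*time.Millisecond, 30*time.Second, true,
    		func(ctx context.Context) (bool, error) {
    			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // CRD not visible yet: keep polling
    			}
    			for _, cond := range crd.Status.Conditions {
    				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := apiextensionsclient.NewForConfigOrDie(cfg)
    	err = waitForCRDEstablished(context.Background(), cs, "volumesnapshotclasses.snapshot.storage.k8s.io")
    	fmt.Println("established:", err == nil)
    }
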
	I1216 04:26:11.113788   10343 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.111091655s)
	I1216 04:26:11.113821   10343 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-920118"
	I1216 04:26:11.115494   10343 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 04:26:11.118943   10343 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 04:26:11.121280   10343 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:26:11.121304   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:11.253696   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:11.310775   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:26:11.457538   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:11.621550   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:11.715011   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:11.753759   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:11.956383   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:12.122369   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:12.253342   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:12.456823   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:12.622556   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:12.753452   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:12.956488   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:13.122444   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:13.252512   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:13.456705   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:13.622213   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:13.753533   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:13.775886   10343 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.465069783s)
	I1216 04:26:13.957191   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:14.122798   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:14.214904   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:14.254207   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:14.456672   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:14.621764   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:14.752971   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:14.957109   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:15.122713   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:15.253541   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:15.457072   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:15.622665   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:15.753892   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:15.957108   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:16.123009   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:16.252929   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:16.456925   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:16.622153   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:16.714478   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:16.752830   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:16.795089   10343 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 04:26:16.795150   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:16.812594   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:16.910740   10343 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 04:26:16.923575   10343 addons.go:239] Setting addon gcp-auth=true in "addons-920118"
	I1216 04:26:16.923625   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:16.923947   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:16.940686   10343 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 04:26:16.940737   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:16.956873   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:16.958572   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
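
The cli_runner/sshutil pairs above show how minikube reaches the node under the docker driver: the container publishes the guest's 22/tcp on an ephemeral host port (32768 in this run), which is discovered with the Go template in the inspect command, and the SSH client then dials 127.0.0.1 on that port as user "docker" with the machine's id_rsa key. The same discovery step in Go, reusing the template from the log (minus the extra single quotes the log version wraps around the output):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort asks Docker which host port is published for the container's
    // 22/tcp, using the same Go template as the cli_runner line above.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("addons-920118")
    	if err != nil {
    		panic(err)
    	}
    	// sshutil then dials 127.0.0.1:<port> with the key shown in the log.
    	fmt.Printf("ssh -p %s -i /home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa docker@127.0.0.1\n", port)
    }
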
	I1216 04:26:17.049787   10343 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 04:26:17.051358   10343 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:26:17.052425   10343 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 04:26:17.052439   10343 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 04:26:17.066253   10343 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 04:26:17.066276   10343 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 04:26:17.079687   10343 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:26:17.079708   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 04:26:17.092361   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:26:17.122086   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:17.253221   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:17.385523   10343 addons.go:495] Verifying addon gcp-auth=true in "addons-920118"
	I1216 04:26:17.387747   10343 out.go:179] * Verifying gcp-auth addon...
	I1216 04:26:17.389442   10343 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 04:26:17.391860   10343 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 04:26:17.391884   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:17.492514   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:17.622192   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:17.753006   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:17.892517   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:17.957263   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:18.122289   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:18.253352   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:18.392882   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:18.456428   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:18.621939   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:18.714629   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:18.753088   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:18.892581   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:18.956111   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:19.121572   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:19.253802   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:19.392632   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:19.456881   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:19.621481   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:19.753400   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:19.892991   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:19.956528   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:20.122315   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:20.253463   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:20.393094   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:20.456498   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:20.622072   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:20.752698   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:20.892169   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:20.956455   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:21.122187   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:21.214517   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:21.253059   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:21.392454   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:21.457095   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:21.621591   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:21.753450   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:21.892131   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:21.956462   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:22.122127   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:22.252827   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:22.392299   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:22.456819   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:22.623770   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:22.753968   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:22.892469   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:22.955943   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:23.121488   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:23.214826   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:23.253343   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:23.392715   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:23.456490   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:23.622035   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:23.753088   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:23.892592   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:23.956695   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:24.122355   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:24.253377   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:24.392139   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:24.456411   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:24.622403   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:24.753495   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:24.892891   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:24.956436   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:25.122167   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:25.253142   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:25.392873   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:25.456324   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:25.621820   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:25.713975   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:25.753596   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:25.891928   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:25.956261   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:26.121984   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:26.252764   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:26.392361   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:26.456724   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:26.622496   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:26.753979   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:26.892477   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:26.956893   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:27.121455   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:27.253424   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:27.392940   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:27.456393   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:27.621940   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:27.714364   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:27.752705   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:27.892586   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:27.955804   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:28.122392   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:28.253453   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:28.392099   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:28.456660   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:28.622438   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:28.753460   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:28.893132   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:28.956593   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:29.122151   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:29.253245   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:29.392967   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:29.456315   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:29.623944   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:29.714479   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:29.752946   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:29.892305   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:29.956882   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:30.121511   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:30.253054   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:30.392506   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:30.456900   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:30.622548   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:30.753439   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:30.893439   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:30.956613   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:31.122049   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:31.252668   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:31.392476   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:31.465523   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:31.622344   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:31.714657   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:31.753214   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:31.892805   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:31.956316   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:32.122093   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:32.254404   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:32.392914   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:32.456530   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:32.622577   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:32.753198   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:32.892578   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:32.955682   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:33.122045   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:33.252862   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:33.392273   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:33.456676   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:33.622334   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:33.753077   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:33.892468   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:33.956744   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:34.123127   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:34.214559   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:34.253122   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:34.392880   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:34.456351   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:34.622059   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:34.752799   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:34.892829   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:34.956072   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:35.121688   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:35.253390   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:35.392879   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:35.456172   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:35.621976   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:35.753627   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:35.891928   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:35.956067   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:36.121506   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:36.214860   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:36.253230   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:36.392728   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:36.456203   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:36.621682   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:36.753676   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:36.892187   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:36.956538   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:37.122173   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:37.253315   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:37.393125   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:37.456308   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:37.622087   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:37.753572   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:37.892124   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:37.956648   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:38.123859   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:38.252872   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:38.392561   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:38.455735   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:38.622305   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:38.714783   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:38.753582   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:38.891888   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:38.956506   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:39.122045   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:39.253039   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:39.393150   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:39.456570   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:39.622102   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:39.753329   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:39.892800   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:39.956180   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:40.121988   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:40.252835   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:40.392333   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:40.457154   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:40.621736   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:40.753471   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:40.891989   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:40.956345   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:41.122041   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:41.214392   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:41.252721   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:41.392327   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:41.456703   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:41.622283   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:41.753266   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:41.892668   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:41.956109   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:42.121710   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:42.253641   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:42.391917   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:42.456546   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:42.622514   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:42.753612   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:42.891837   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:42.956343   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:43.121954   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:43.252591   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:43.392045   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:43.456245   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:43.621674   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:43.713937   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:43.753391   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:43.892858   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:43.956245   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:44.121889   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:44.253034   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:44.392594   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:44.456046   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:44.621572   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:44.753476   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:44.892918   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:44.956287   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:45.121791   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:45.253509   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:45.391875   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:45.456304   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:45.621824   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:45.714298   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:45.752709   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:45.892260   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:45.956656   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:46.122095   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:46.253099   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:46.392642   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:46.456033   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:46.621527   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:46.753484   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:46.892696   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:46.956161   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:47.121854   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:47.253598   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:47.392184   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:47.456516   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:47.621990   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:47.714617   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:47.753136   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:47.892685   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:47.956177   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:48.121851   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:48.252719   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:48.392180   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:48.456572   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:48.622343   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:48.753363   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:48.892562   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:48.955871   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:49.122354   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:49.253578   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:49.392338   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:49.456706   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:49.622537   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:49.715036   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:49.753689   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:49.892058   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:49.956347   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:50.122203   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:50.252857   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:50.392214   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:50.456735   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:50.622202   10343 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:26:50.622224   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:50.714226   10343 node_ready.go:49] node "addons-920118" is "Ready"
	I1216 04:26:50.714262   10343 node_ready.go:38] duration metric: took 41.002627295s for node "addons-920118" to be "Ready" ...
	I1216 04:26:50.714278   10343 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:26:50.714321   10343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:26:50.727476   10343 api_server.go:72] duration metric: took 41.63192077s to wait for apiserver process to appear ...
	I1216 04:26:50.727499   10343 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:26:50.727516   10343 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 04:26:50.732171   10343 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 04:26:50.733057   10343 api_server.go:141] control plane version: v1.34.2
	I1216 04:26:50.733079   10343 api_server.go:131] duration metric: took 5.574386ms to wait for apiserver health ...
	I1216 04:26:50.733088   10343 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 04:26:50.736580   10343 system_pods.go:59] 20 kube-system pods found
	I1216 04:26:50.736609   10343 system_pods.go:61] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:50.736617   10343 system_pods.go:61] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:26:50.736625   10343 system_pods.go:61] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:50.736630   10343 system_pods.go:61] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:50.736635   10343 system_pods.go:61] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:50.736641   10343 system_pods.go:61] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:50.736648   10343 system_pods.go:61] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:50.736651   10343 system_pods.go:61] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:50.736654   10343 system_pods.go:61] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:50.736659   10343 system_pods.go:61] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:50.736665   10343 system_pods.go:61] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:50.736668   10343 system_pods.go:61] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:50.736672   10343 system_pods.go:61] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:50.736677   10343 system_pods.go:61] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:50.736685   10343 system_pods.go:61] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:50.736690   10343 system_pods.go:61] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:50.736697   10343 system_pods.go:61] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:50.736704   10343 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.736710   10343 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.736716   10343 system_pods.go:61] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:26:50.736723   10343 system_pods.go:74] duration metric: took 3.630467ms to wait for pod list to return data ...
	I1216 04:26:50.736730   10343 default_sa.go:34] waiting for default service account to be created ...
	I1216 04:26:50.738453   10343 default_sa.go:45] found service account: "default"
	I1216 04:26:50.738471   10343 default_sa.go:55] duration metric: took 1.734758ms for default service account to be created ...
	I1216 04:26:50.738481   10343 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 04:26:50.742284   10343 system_pods.go:86] 20 kube-system pods found
	I1216 04:26:50.742318   10343 system_pods.go:89] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:50.742336   10343 system_pods.go:89] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:26:50.742349   10343 system_pods.go:89] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:50.742358   10343 system_pods.go:89] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:50.742444   10343 system_pods.go:89] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:50.742463   10343 system_pods.go:89] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:50.742478   10343 system_pods.go:89] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:50.742490   10343 system_pods.go:89] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:50.742497   10343 system_pods.go:89] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:50.742512   10343 system_pods.go:89] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:50.742523   10343 system_pods.go:89] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:50.742539   10343 system_pods.go:89] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:50.742548   10343 system_pods.go:89] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:50.742559   10343 system_pods.go:89] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:50.742575   10343 system_pods.go:89] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:50.742589   10343 system_pods.go:89] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:50.742613   10343 system_pods.go:89] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:50.742627   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.742635   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.742647   10343 system_pods.go:89] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:26:50.742663   10343 retry.go:31] will retry after 228.794687ms: missing components: kube-dns
	I1216 04:26:50.752566   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:50.893282   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:50.993298   10343 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:26:50.993322   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:50.994536   10343 system_pods.go:86] 20 kube-system pods found
	I1216 04:26:50.994563   10343 system_pods.go:89] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:50.994570   10343 system_pods.go:89] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:26:50.994577   10343 system_pods.go:89] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:50.994585   10343 system_pods.go:89] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:50.994590   10343 system_pods.go:89] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:50.994594   10343 system_pods.go:89] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:50.994598   10343 system_pods.go:89] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:50.994603   10343 system_pods.go:89] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:50.994607   10343 system_pods.go:89] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:50.994613   10343 system_pods.go:89] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:50.994618   10343 system_pods.go:89] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:50.994621   10343 system_pods.go:89] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:50.994626   10343 system_pods.go:89] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:50.994633   10343 system_pods.go:89] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:50.994638   10343 system_pods.go:89] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:50.994646   10343 system_pods.go:89] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:50.994651   10343 system_pods.go:89] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:50.994657   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.994663   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.994669   10343 system_pods.go:89] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:26:50.994681   10343 retry.go:31] will retry after 303.0198ms: missing components: kube-dns
	I1216 04:26:51.121983   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:51.253728   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:51.356158   10343 system_pods.go:86] 20 kube-system pods found
	I1216 04:26:51.356194   10343 system_pods.go:89] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:51.356202   10343 system_pods.go:89] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:26:51.356209   10343 system_pods.go:89] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:51.356215   10343 system_pods.go:89] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:51.356222   10343 system_pods.go:89] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:51.356227   10343 system_pods.go:89] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:51.356232   10343 system_pods.go:89] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:51.356236   10343 system_pods.go:89] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:51.356239   10343 system_pods.go:89] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:51.356245   10343 system_pods.go:89] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:51.356250   10343 system_pods.go:89] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:51.356256   10343 system_pods.go:89] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:51.356263   10343 system_pods.go:89] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:51.356269   10343 system_pods.go:89] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:51.356276   10343 system_pods.go:89] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:51.356281   10343 system_pods.go:89] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:51.356288   10343 system_pods.go:89] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:51.356293   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:51.356299   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:51.356306   10343 system_pods.go:89] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:26:51.356323   10343 retry.go:31] will retry after 362.929112ms: missing components: kube-dns
	I1216 04:26:51.391947   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:51.456758   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:51.623868   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:51.723437   10343 system_pods.go:86] 20 kube-system pods found
	I1216 04:26:51.723470   10343 system_pods.go:89] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:51.723479   10343 system_pods.go:89] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Running
	I1216 04:26:51.723491   10343 system_pods.go:89] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:51.723500   10343 system_pods.go:89] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:51.723509   10343 system_pods.go:89] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:51.723515   10343 system_pods.go:89] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:51.723521   10343 system_pods.go:89] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:51.723526   10343 system_pods.go:89] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:51.723531   10343 system_pods.go:89] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:51.723539   10343 system_pods.go:89] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:51.723546   10343 system_pods.go:89] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:51.723550   10343 system_pods.go:89] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:51.723555   10343 system_pods.go:89] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:51.723565   10343 system_pods.go:89] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:51.723571   10343 system_pods.go:89] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:51.723580   10343 system_pods.go:89] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:51.723586   10343 system_pods.go:89] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:51.723590   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:51.723598   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:51.723603   10343 system_pods.go:89] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Running
	I1216 04:26:51.723611   10343 system_pods.go:126] duration metric: took 985.124018ms to wait for k8s-apps to be running ...
	I1216 04:26:51.723621   10343 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 04:26:51.723663   10343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:26:51.739330   10343 system_svc.go:56] duration metric: took 15.699551ms WaitForService to wait for kubelet
	I1216 04:26:51.739362   10343 kubeadm.go:587] duration metric: took 42.643807434s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:26:51.739385   10343 node_conditions.go:102] verifying NodePressure condition ...
	I1216 04:26:51.741797   10343 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 04:26:51.741819   10343 node_conditions.go:123] node cpu capacity is 8
	I1216 04:26:51.741837   10343 node_conditions.go:105] duration metric: took 2.445119ms to run NodePressure ...
	I1216 04:26:51.741848   10343 start.go:242] waiting for startup goroutines ...
	I1216 04:26:51.753185   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:51.892788   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:51.956779   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:52.123150   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:52.254362   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:52.393527   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:52.456335   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:52.623114   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:52.753725   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:52.892497   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:52.957406   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:53.123305   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:53.254174   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:53.392845   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:53.456559   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:53.622846   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:53.753935   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:53.892522   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:53.957487   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:54.122776   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:54.253795   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:54.393411   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:54.457179   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:54.622503   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:54.753434   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:54.893443   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:54.957363   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:55.123225   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:55.253782   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:55.392402   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:55.457527   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:55.622504   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:55.753670   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:55.892342   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:55.957326   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:56.122372   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:56.252880   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:56.392624   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:56.456416   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:56.623083   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:56.753593   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:56.892333   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:56.956778   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:57.123401   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:57.253232   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:57.392452   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:57.456919   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:57.622274   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:57.754057   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:57.892778   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:57.956179   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:58.122700   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:58.255185   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:58.393496   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:58.457436   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:58.622512   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:58.752938   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:58.892633   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:58.956487   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:59.123165   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:59.254326   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:59.393059   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:59.456945   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:59.622595   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:59.753811   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:59.893062   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:59.956835   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:00.122985   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:00.253953   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:00.392829   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:00.456727   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:00.622180   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:00.753969   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:00.892563   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:00.956982   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:01.121991   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:01.253685   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:01.392295   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:01.457630   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:01.625363   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:01.754321   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:01.892615   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:01.957166   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:02.123493   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:02.254245   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:02.393498   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:02.457674   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:02.623726   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:02.753635   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:02.892751   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:02.956745   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:03.159506   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:03.254213   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:03.393106   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:03.457240   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:03.621912   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:03.753746   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:03.892480   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:03.956428   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:04.123076   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:04.253789   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:04.393118   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:04.457202   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:04.622773   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:04.753548   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:04.915651   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:04.956394   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:05.146913   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:05.253827   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:05.392752   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:05.456536   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:05.622634   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:05.756422   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:05.892959   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:05.956690   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:06.123197   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:06.253633   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:06.392458   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:06.493452   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:06.623774   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:06.753271   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:06.892831   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:06.956800   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:07.123348   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:07.254912   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:07.393692   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:07.457170   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:07.622211   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:07.753829   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:07.892881   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:07.957043   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:08.122175   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:08.254320   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:08.392630   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:08.456013   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:08.622103   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:08.753349   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:08.893830   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:08.995588   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:09.122645   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:09.253267   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:09.393548   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:09.457248   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:09.622532   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:09.753661   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:09.892434   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:09.956824   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:10.122995   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:10.253811   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:10.392345   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:10.457376   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:10.622155   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:10.754084   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:10.892884   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:10.956573   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:11.124633   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:11.253599   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:11.392448   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:11.457164   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:11.622073   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:11.755324   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:11.893241   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:11.956565   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:12.122307   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:12.253856   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:12.392115   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:12.456729   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:12.622677   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:12.754155   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:12.893478   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:12.957238   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:13.122625   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:13.253451   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:13.393328   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:13.456973   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:13.621691   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:13.832052   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:13.892314   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:13.989385   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:14.121949   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:14.253952   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:14.393860   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:14.457918   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:14.624881   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:14.754645   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:14.893867   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:14.956713   10343 kapi.go:107] duration metric: took 1m4.503265783s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 04:27:15.124160   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:15.254352   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:15.392743   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:15.622840   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:15.754060   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:15.892941   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:16.122671   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:16.253146   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:16.392666   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:16.623308   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:16.754214   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:16.892894   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:17.123676   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:17.253734   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:17.391983   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:17.622207   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:17.754709   10343 kapi.go:107] duration metric: took 1m7.004500189s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 04:27:17.893903   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:18.122168   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:18.393515   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:18.623101   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:18.940144   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:19.156294   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:19.392720   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:19.623174   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:19.892944   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:20.121876   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:20.392740   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:20.623198   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:20.893138   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:21.121950   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:21.393149   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:21.622238   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:21.892820   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:22.123123   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:22.392951   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:22.652382   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:22.893000   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:23.121896   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:23.392639   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:23.623476   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:23.893238   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:24.121948   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:24.392982   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:24.622070   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:24.893061   10343 kapi.go:107] duration metric: took 1m7.503614245s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 04:27:24.894422   10343 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-920118 cluster.
	I1216 04:27:24.895900   10343 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 04:27:24.897326   10343 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 04:27:25.123598   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:25.622998   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:26.122272   10343 kapi.go:107] duration metric: took 1m15.003326955s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 04:27:26.123947   10343 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, cloud-spanner, default-storageclass, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, metrics-server, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1216 04:27:26.125124   10343 addons.go:530] duration metric: took 1m17.029513661s for enable addons: enabled=[nvidia-device-plugin registry-creds cloud-spanner default-storageclass amd-gpu-device-plugin inspektor-gadget ingress-dns metrics-server storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1216 04:27:26.125167   10343 start.go:247] waiting for cluster config update ...
	I1216 04:27:26.125194   10343 start.go:256] writing updated cluster config ...
	I1216 04:27:26.125522   10343 ssh_runner.go:195] Run: rm -f paused
	I1216 04:27:26.129322   10343 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:27:26.132085   10343 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-znpvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.135726   10343 pod_ready.go:94] pod "coredns-66bc5c9577-znpvj" is "Ready"
	I1216 04:27:26.135744   10343 pod_ready.go:86] duration metric: took 3.639958ms for pod "coredns-66bc5c9577-znpvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.137467   10343 pod_ready.go:83] waiting for pod "etcd-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.140670   10343 pod_ready.go:94] pod "etcd-addons-920118" is "Ready"
	I1216 04:27:26.140689   10343 pod_ready.go:86] duration metric: took 3.20403ms for pod "etcd-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.142233   10343 pod_ready.go:83] waiting for pod "kube-apiserver-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.145341   10343 pod_ready.go:94] pod "kube-apiserver-addons-920118" is "Ready"
	I1216 04:27:26.145357   10343 pod_ready.go:86] duration metric: took 3.103983ms for pod "kube-apiserver-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.146846   10343 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.533741   10343 pod_ready.go:94] pod "kube-controller-manager-addons-920118" is "Ready"
	I1216 04:27:26.533782   10343 pod_ready.go:86] duration metric: took 386.910026ms for pod "kube-controller-manager-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.732980   10343 pod_ready.go:83] waiting for pod "kube-proxy-8wpnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:27.132461   10343 pod_ready.go:94] pod "kube-proxy-8wpnk" is "Ready"
	I1216 04:27:27.132495   10343 pod_ready.go:86] duration metric: took 399.487575ms for pod "kube-proxy-8wpnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:27.333584   10343 pod_ready.go:83] waiting for pod "kube-scheduler-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:27.733163   10343 pod_ready.go:94] pod "kube-scheduler-addons-920118" is "Ready"
	I1216 04:27:27.733198   10343 pod_ready.go:86] duration metric: took 399.591784ms for pod "kube-scheduler-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:27.733214   10343 pod_ready.go:40] duration metric: took 1.603864645s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:27:27.776559   10343 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 04:27:27.778482   10343 out.go:179] * Done! kubectl is now configured to use "addons-920118" cluster and "default" namespace by default
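
The kapi.go lines above are a fixed-interval poll: minikube lists the pods behind each addon label roughly every 500ms and reprints the current phase until every match reports Ready (the "took 1mXXs" lines mark each label completing). A minimal client-go sketch of that kind of wait loop, assuming a reachable kubeconfig; the selector, timeout, and function names below are illustrative, not minikube's actual code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls pods matching selector in ns until all of them
    // carry a Ready=True condition, the behavior the kapi.go lines log.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval inferred from the timestamps above
    	}
    	return fmt.Errorf("timed out waiting for %q", selector)
    }

    func allReady(pods []corev1.Pod) bool {
    	for _, p := range pods {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForLabel(context.Background(), cs, "kube-system",
    		"kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("all pods ready")
    }

Run against the profile's kubeconfig, a loop like this returns as soon as the registry pods turn Ready, matching the 04:27:14 "duration metric" line above.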
	
	
	==> CRI-O <==
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.922272123Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-5hcvr/POD" id=18b0d5dc-9758-4e90-b45c-0022d7f53618 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.922366307Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.929159461Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-5hcvr Namespace:default ID:06a3d349db76802a80311335c8960c3706232546a74753634e31829401d6f7c8 UID:1deb119e-9154-43b4-87f7-01e822560871 NetNS:/var/run/netns/aed1a0f0-7d3d-4ad3-8df5-1cb53321c298 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00051f5e8}] Aliases:map[]}"
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.929187716Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-5hcvr to CNI network \"kindnet\" (type=ptp)"
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.939760247Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-5hcvr Namespace:default ID:06a3d349db76802a80311335c8960c3706232546a74753634e31829401d6f7c8 UID:1deb119e-9154-43b4-87f7-01e822560871 NetNS:/var/run/netns/aed1a0f0-7d3d-4ad3-8df5-1cb53321c298 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00051f5e8}] Aliases:map[]}"
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.939904908Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-5hcvr for CNI network kindnet (type=ptp)"
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.940667061Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.94144359Z" level=info msg="Ran pod sandbox 06a3d349db76802a80311335c8960c3706232546a74753634e31829401d6f7c8 with infra container: default/hello-world-app-5d498dc89-5hcvr/POD" id=18b0d5dc-9758-4e90-b45c-0022d7f53618 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.9426015Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7b810855-0b7b-4dfa-8b01-95547e89368c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.942758134Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=7b810855-0b7b-4dfa-8b01-95547e89368c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.94280574Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=7b810855-0b7b-4dfa-8b01-95547e89368c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.943418261Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=4d8ed78a-1933-45b2-aa32-647afddfc8a9 name=/runtime.v1.ImageService/PullImage
	Dec 16 04:30:04 addons-920118 crio[777]: time="2025-12-16T04:30:04.948679124Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.347129975Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=4d8ed78a-1933-45b2-aa32-647afddfc8a9 name=/runtime.v1.ImageService/PullImage
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.347891035Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f1efd87f-7fc8-4ac8-b453-9379008f0360 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.349573041Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=90cb088f-a7f9-48ef-8443-fae1df6b20c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.353459685Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-5hcvr/hello-world-app" id=74d15806-0116-4e00-a4d8-bdb16f50d672 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.35358135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.358808937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.358964637Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/14fb5fe3465e94d5bcf6a97461ed30e3fb0d62993a30886a94412c55822a8823/merged/etc/passwd: no such file or directory"
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.358987288Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/14fb5fe3465e94d5bcf6a97461ed30e3fb0d62993a30886a94412c55822a8823/merged/etc/group: no such file or directory"
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.359241849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.389143992Z" level=info msg="Created container c683fae1d997a3a71733173c56706a26892153b80be8e16b995e04cbf130786c: default/hello-world-app-5d498dc89-5hcvr/hello-world-app" id=74d15806-0116-4e00-a4d8-bdb16f50d672 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.389742777Z" level=info msg="Starting container: c683fae1d997a3a71733173c56706a26892153b80be8e16b995e04cbf130786c" id=68cc4f8b-d374-4294-8e1b-72951e7f89fd name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 04:30:05 addons-920118 crio[777]: time="2025-12-16T04:30:05.391846598Z" level=info msg="Started container" PID=9487 containerID=c683fae1d997a3a71733173c56706a26892153b80be8e16b995e04cbf130786c description=default/hello-world-app-5d498dc89-5hcvr/hello-world-app id=68cc4f8b-d374-4294-8e1b-72951e7f89fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=06a3d349db76802a80311335c8960c3706232546a74753634e31829401d6f7c8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	c683fae1d997a       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   06a3d349db768       hello-world-app-5d498dc89-5hcvr             default
	52cafa55de02b       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   0607029ef5569       registry-creds-764b6fb674-rblqj             kube-system
	083d27c6550a2       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   4f96186e62bc1       nginx                                       default
	de66658adcaed       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   307dcc9e89198       busybox                                     default
	7003e956af0ad       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	7155e643211dd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   f7ecbaf8fd130       gcp-auth-78565c9fb4-nb8d6                   gcp-auth
	8aff65da11b97       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	020f3f05b4049       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	83f26bc5ed030       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	bb8f922f85ca6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   5509c148fc0bc       gadget-76954                                gadget
	086cde64476bc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	ac8adb18f5d79       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   1c925d2e74753       ingress-nginx-controller-85d4c799dd-zqp9n   ingress-nginx
	630c8b3279b37       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   ccc20420d744f       registry-proxy-sh6dp                        kube-system
	3c77fede48641       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   bfa53a42fbc08       amd-gpu-device-plugin-kffg9                 kube-system
	89e5c7bfccb0e       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             2 minutes ago            Exited              patch                                    2                   8421a491089dc       ingress-nginx-admission-patch-wgjhv         ingress-nginx
	e71caaec714d2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	b9a013f9f0f76       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago            Running             csi-resizer                              0                   ed31ee48618f9       csi-hostpath-resizer-0                      kube-system
	42eda45aa69e5       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   74fcbb01d0d70       nvidia-device-plugin-daemonset-82xlt        kube-system
	65be2e67b7994       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   a84a696c37162       ingress-nginx-admission-create-dhdgp        ingress-nginx
	e4e430d943bdb       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   f58ae28d123e7       kube-ingress-dns-minikube                   kube-system
	1487414fac27e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   31d52390e7592       snapshot-controller-7d9fbc56b8-xnx2s        kube-system
	1aca72e751ef6       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   12d003ec00240       metrics-server-85b7d694d7-49lmv             kube-system
	7bbeac49d29ea       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   2325dd9007fc9       registry-6b586f9694-76pzp                   kube-system
	3ba35a512da89       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   acb5a720f68d4       csi-hostpath-attacher-0                     kube-system
	4176f3267a36c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   67be75d6e474c       snapshot-controller-7d9fbc56b8-7jwq5        kube-system
	45baa7e28fe04       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   3ce5fb735f966       local-path-provisioner-648f6765c9-m6ntg     local-path-storage
	c54e6bce45489       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   8555cbb7a3371       yakd-dashboard-5ff678cb9-phnzx              yakd-dashboard
	2d841e7838e4f       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   1c16a2385a72e       cloud-spanner-emulator-5bdddb765-p2sxn      default
	fe8ccf3878016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   ac7472e34770a       storage-provisioner                         kube-system
	157d004cac3b9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   04d847ef7710b       coredns-66bc5c9577-znpvj                    kube-system
	2b8f11e422be8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago            Running             kindnet-cni                              0                   50c4d0f12aacb       kindnet-grqmf                               kube-system
	20fa3ad645ed1       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago            Running             kube-proxy                               0                   1ed495a721103       kube-proxy-8wpnk                            kube-system
	c71b691cde317       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   9c98c42e00b7f       kube-scheduler-addons-920118                kube-system
	32eff47c7448a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   736679ee4953e       kube-apiserver-addons-920118                kube-system
	0cdc3c36bf12c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   64a56643ef1f8       kube-controller-manager-addons-920118       kube-system
	6973498efaf26       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   f0a587b7070a2       etcd-addons-920118                          kube-system
	
	
	==> coredns [157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e] <==
	[INFO] 10.244.0.22:47166 - 58683 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143761s
	[INFO] 10.244.0.22:50862 - 13086 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006321971s
	[INFO] 10.244.0.22:48096 - 47704 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007139357s
	[INFO] 10.244.0.22:48913 - 17181 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004469334s
	[INFO] 10.244.0.22:36718 - 43958 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008461629s
	[INFO] 10.244.0.22:34871 - 53772 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004295477s
	[INFO] 10.244.0.22:53396 - 59322 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005290741s
	[INFO] 10.244.0.22:50314 - 57704 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00105588s
	[INFO] 10.244.0.22:38572 - 8896 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001274948s
	[INFO] 10.244.0.25:45786 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00024386s
	[INFO] 10.244.0.25:33906 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140893s
	[INFO] 10.244.0.28:42583 - 19034 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000219198s
	[INFO] 10.244.0.28:36221 - 57116 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000290928s
	[INFO] 10.244.0.28:41996 - 48583 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000126202s
	[INFO] 10.244.0.28:51748 - 14440 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00019939s
	[INFO] 10.244.0.28:60212 - 34683 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000119352s
	[INFO] 10.244.0.28:37144 - 2348 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000151017s
	[INFO] 10.244.0.28:37551 - 52628 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.007565905s
	[INFO] 10.244.0.28:33531 - 29485 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.009726089s
	[INFO] 10.244.0.28:40501 - 64072 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006302526s
	[INFO] 10.244.0.28:56312 - 2932 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00704348s
	[INFO] 10.244.0.28:58243 - 59642 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005367012s
	[INFO] 10.244.0.28:43104 - 5553 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006410954s
	[INFO] 10.244.0.28:38629 - 45337 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001668898s
	[INFO] 10.244.0.28:45956 - 18782 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001829936s
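
The NXDOMAIN-then-NOERROR pattern above is resolv.conf search-path expansion, not a DNS failure: with the default ndots:5, a pod resolving accounts.google.com first tries every search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-supplied internal domains) and only then the bare name, which succeeds. A small Go sketch that would reproduce the same query fan-out when run inside a pod on this cluster (the hostname is taken from the log; everything else is standard library):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Inside a pod, the resolver honors /etc/resolv.conf, so this one
    	// call fans out into the search-domain queries CoreDNS logged above.
    	addrs, err := net.LookupHost("accounts.google.com")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	for _, a := range addrs {
    		fmt.Println(a)
    	}
    }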
	
	
	==> describe nodes <==
	Name:               addons-920118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-920118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=addons-920118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T04_26_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-920118
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-920118"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 04:26:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-920118
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 04:29:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 04:29:57 +0000   Tue, 16 Dec 2025 04:25:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 04:29:57 +0000   Tue, 16 Dec 2025 04:25:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 04:29:57 +0000   Tue, 16 Dec 2025 04:25:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 04:29:57 +0000   Tue, 16 Dec 2025 04:26:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-920118
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                1aa1cdd3-ea31-4f68-9e9e-dbc03ee47c3c
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  default                     cloud-spanner-emulator-5bdddb765-p2sxn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  default                     hello-world-app-5d498dc89-5hcvr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-76954                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  gcp-auth                    gcp-auth-78565c9fb4-nb8d6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-zqp9n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m56s
	  kube-system                 amd-gpu-device-plugin-kffg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 coredns-66bc5c9577-znpvj                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m57s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 csi-hostpathplugin-sm7qv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 etcd-addons-920118                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m3s
	  kube-system                 kindnet-grqmf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m57s
	  kube-system                 kube-apiserver-addons-920118                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-addons-920118        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-8wpnk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-920118                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 metrics-server-85b7d694d7-49lmv              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m56s
	  kube-system                 nvidia-device-plugin-daemonset-82xlt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 registry-6b586f9694-76pzp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 registry-creds-764b6fb674-rblqj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 registry-proxy-sh6dp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-7jwq5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 snapshot-controller-7d9fbc56b8-xnx2s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  local-path-storage          local-path-provisioner-648f6765c9-m6ntg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-phnzx               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m55s  kube-proxy       
	  Normal  Starting                 4m3s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s   kubelet          Node addons-920118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s   kubelet          Node addons-920118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s   kubelet          Node addons-920118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m58s  node-controller  Node addons-920118 event: Registered Node addons-920118 in Controller
	  Normal  NodeReady                3m16s  kubelet          Node addons-920118 status is now: NodeReady
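
Everything in this block comes from the Node object itself, so the readiness and pressure conditions shown above can also be checked programmatically. A minimal client-go sketch using the node name from this report; the kubeconfig path is an assumption:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-920118", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print the same condition table `kubectl describe node` renders above.
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
    	}
    }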
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391] <==
	{"level":"warn","ts":"2025-12-16T04:26:00.489553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.497039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.504795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.512106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.520573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.530684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.537076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.545921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.552575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.560689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.567366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.573666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.579746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.600465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.607972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.614759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.657940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:11.530595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:11.537749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:38.092428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:38.099300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:38.109692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:38.116645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58722","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T04:27:13.830310Z","caller":"traceutil/trace.go:172","msg":"trace[317146620] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"145.290515ms","start":"2025-12-16T04:27:13.684997Z","end":"2025-12-16T04:27:13.830287Z","steps":["trace[317146620] 'process raft request'  (duration: 71.106735ms)","trace[317146620] 'compare'  (duration: 74.086135ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:27:22.650076Z","caller":"traceutil/trace.go:172","msg":"trace[1727911472] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"102.929324ms","start":"2025-12-16T04:27:22.547120Z","end":"2025-12-16T04:27:22.650049Z","steps":["trace[1727911472] 'process raft request'  (duration: 62.883528ms)","trace[1727911472] 'compare'  (duration: 39.896633ms)"],"step_count":2}
	
	
	==> gcp-auth [7155e643211ddd152b4f57f80440213bcc24fb8523defe39dd525466f92bbde2] <==
	2025/12/16 04:27:24 GCP Auth Webhook started!
	2025/12/16 04:27:28 Ready to marshal response ...
	2025/12/16 04:27:28 Ready to write response ...
	2025/12/16 04:27:28 Ready to marshal response ...
	2025/12/16 04:27:28 Ready to write response ...
	2025/12/16 04:27:28 Ready to marshal response ...
	2025/12/16 04:27:28 Ready to write response ...
	2025/12/16 04:27:42 Ready to marshal response ...
	2025/12/16 04:27:42 Ready to write response ...
	2025/12/16 04:27:47 Ready to marshal response ...
	2025/12/16 04:27:47 Ready to write response ...
	2025/12/16 04:27:48 Ready to marshal response ...
	2025/12/16 04:27:48 Ready to write response ...
	2025/12/16 04:27:48 Ready to marshal response ...
	2025/12/16 04:27:48 Ready to write response ...
	2025/12/16 04:27:55 Ready to marshal response ...
	2025/12/16 04:27:55 Ready to write response ...
	2025/12/16 04:28:00 Ready to marshal response ...
	2025/12/16 04:28:00 Ready to write response ...
	2025/12/16 04:28:12 Ready to marshal response ...
	2025/12/16 04:28:12 Ready to write response ...
	2025/12/16 04:30:04 Ready to marshal response ...
	2025/12/16 04:30:04 Ready to write response ...
	
	
	==> kernel <==
	 04:30:06 up 12 min,  0 user,  load average: 0.31, 0.76, 0.39
	Linux addons-920118 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a] <==
	I1216 04:28:00.187311       1 main.go:301] handling current node
	I1216 04:28:10.187684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:28:10.187710       1 main.go:301] handling current node
	I1216 04:28:20.187546       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:28:20.187577       1 main.go:301] handling current node
	I1216 04:28:30.190084       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:28:30.190124       1 main.go:301] handling current node
	I1216 04:28:40.195114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:28:40.195142       1 main.go:301] handling current node
	I1216 04:28:50.192965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:28:50.192994       1 main.go:301] handling current node
	I1216 04:29:00.187287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:29:00.187316       1 main.go:301] handling current node
	I1216 04:29:10.187453       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:29:10.187490       1 main.go:301] handling current node
	I1216 04:29:20.190935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:29:20.190964       1 main.go:301] handling current node
	I1216 04:29:30.189077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:29:30.189107       1 main.go:301] handling current node
	I1216 04:29:40.188312       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:29:40.188358       1 main.go:301] handling current node
	I1216 04:29:50.188364       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:29:50.188395       1 main.go:301] handling current node
	I1216 04:30:00.188449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:30:00.188514       1 main.go:301] handling current node
	
	
	==> kube-apiserver [32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9] <==
	W1216 04:26:38.116584       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 04:26:50.486472       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.255.203:443: connect: connection refused
	E1216 04:26:50.486510       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.255.203:443: connect: connection refused" logger="UnhandledError"
	W1216 04:26:50.486549       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.255.203:443: connect: connection refused
	E1216 04:26:50.486596       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.255.203:443: connect: connection refused" logger="UnhandledError"
	W1216 04:26:50.504706       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.255.203:443: connect: connection refused
	E1216 04:26:50.504741       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.255.203:443: connect: connection refused" logger="UnhandledError"
	W1216 04:26:50.506406       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.255.203:443: connect: connection refused
	E1216 04:26:50.506444       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.255.203:443: connect: connection refused" logger="UnhandledError"
	W1216 04:27:02.644088       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 04:27:02.644165       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1216 04:27:02.644206       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.232.35:443: connect: connection refused" logger="UnhandledError"
	E1216 04:27:02.646059       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.232.35:443: connect: connection refused" logger="UnhandledError"
	E1216 04:27:02.651688       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.232.35:443: connect: connection refused" logger="UnhandledError"
	E1216 04:27:02.672401       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.232.35:443: connect: connection refused" logger="UnhandledError"
	I1216 04:27:02.748109       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 04:27:36.782867       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36302: use of closed network connection
	E1216 04:27:36.923871       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36336: use of closed network connection
	I1216 04:27:42.666571       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 04:27:42.860230       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.244.191"}
	I1216 04:28:08.981926       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 04:30:04.686385       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.252.35"}
	
	
	==> kube-controller-manager [0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093] <==
	I1216 04:26:08.073376       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 04:26:08.073455       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 04:26:08.073496       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 04:26:08.073521       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 04:26:08.073534       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 04:26:08.073543       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 04:26:08.073518       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 04:26:08.073519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 04:26:08.073524       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 04:26:08.073461       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 04:26:08.074836       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 04:26:08.074863       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 04:26:08.075376       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 04:26:08.081114       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 04:26:08.082238       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:26:08.092515       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1216 04:26:10.337514       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1216 04:26:38.086987       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 04:26:38.087134       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1216 04:26:38.087167       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 04:26:38.099221       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 04:26:38.102486       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 04:26:38.187410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:26:38.203655       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 04:26:53.028104       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889] <==
	I1216 04:26:10.332044       1 server_linux.go:53] "Using iptables proxy"
	I1216 04:26:10.440595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 04:26:10.541322       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 04:26:10.541355       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 04:26:10.541437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 04:26:10.562530       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 04:26:10.562659       1 server_linux.go:132] "Using iptables Proxier"
	I1216 04:26:10.569808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 04:26:10.575418       1 server.go:527] "Version info" version="v1.34.2"
	I1216 04:26:10.575462       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 04:26:10.578650       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 04:26:10.578849       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 04:26:10.578649       1 config.go:200] "Starting service config controller"
	I1216 04:26:10.578686       1 config.go:106] "Starting endpoint slice config controller"
	I1216 04:26:10.579195       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 04:26:10.579190       1 config.go:309] "Starting node config controller"
	I1216 04:26:10.579301       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 04:26:10.579332       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 04:26:10.579300       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 04:26:10.679488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 04:26:10.679503       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 04:26:10.679522       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1] <==
	I1216 04:26:01.232343       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 04:26:01.234856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 04:26:01.235093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 04:26:01.238007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 04:26:01.238260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 04:26:01.238159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 04:26:01.238238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:26:01.238359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:26:01.238414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:26:01.238457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:26:01.238462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:26:01.238158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:26:01.238374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:26:01.239105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 04:26:01.239106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 04:26:01.239214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:26:01.239218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 04:26:01.239319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:26:01.239351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 04:26:01.239387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 04:26:02.095760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:26:02.127669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:26:02.161042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:26:02.205957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1216 04:26:02.730927       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.682353    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwtd\" (UniqueName: \"kubernetes.io/projected/a9106c10-d799-4a5b-acd1-350c69cc976a-kube-api-access-zbwtd\") pod \"a9106c10-d799-4a5b-acd1-350c69cc976a\" (UID: \"a9106c10-d799-4a5b-acd1-350c69cc976a\") "
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.682407    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a9106c10-d799-4a5b-acd1-350c69cc976a-gcp-creds\") pod \"a9106c10-d799-4a5b-acd1-350c69cc976a\" (UID: \"a9106c10-d799-4a5b-acd1-350c69cc976a\") "
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.682490    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a1527b82-da37-11f0-a2e0-2a2fd32512d2\") pod \"a9106c10-d799-4a5b-acd1-350c69cc976a\" (UID: \"a9106c10-d799-4a5b-acd1-350c69cc976a\") "
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.682562    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9106c10-d799-4a5b-acd1-350c69cc976a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a9106c10-d799-4a5b-acd1-350c69cc976a" (UID: "a9106c10-d799-4a5b-acd1-350c69cc976a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.682644    1286 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a9106c10-d799-4a5b-acd1-350c69cc976a-gcp-creds\") on node \"addons-920118\" DevicePath \"\""
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.684616    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9106c10-d799-4a5b-acd1-350c69cc976a-kube-api-access-zbwtd" (OuterVolumeSpecName: "kube-api-access-zbwtd") pod "a9106c10-d799-4a5b-acd1-350c69cc976a" (UID: "a9106c10-d799-4a5b-acd1-350c69cc976a"). InnerVolumeSpecName "kube-api-access-zbwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.685609    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^a1527b82-da37-11f0-a2e0-2a2fd32512d2" (OuterVolumeSpecName: "task-pv-storage") pod "a9106c10-d799-4a5b-acd1-350c69cc976a" (UID: "a9106c10-d799-4a5b-acd1-350c69cc976a"). InnerVolumeSpecName "pvc-f6dd1bf7-177e-41b1-9d8c-f5a4f2389313". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.783607    1286 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-f6dd1bf7-177e-41b1-9d8c-f5a4f2389313\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a1527b82-da37-11f0-a2e0-2a2fd32512d2\") on node \"addons-920118\" "
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.783654    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbwtd\" (UniqueName: \"kubernetes.io/projected/a9106c10-d799-4a5b-acd1-350c69cc976a-kube-api-access-zbwtd\") on node \"addons-920118\" DevicePath \"\""
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.788493    1286 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-f6dd1bf7-177e-41b1-9d8c-f5a4f2389313" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^a1527b82-da37-11f0-a2e0-2a2fd32512d2") on node "addons-920118"
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.884589    1286 reconciler_common.go:299] "Volume detached for volume \"pvc-f6dd1bf7-177e-41b1-9d8c-f5a4f2389313\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a1527b82-da37-11f0-a2e0-2a2fd32512d2\") on node \"addons-920118\" DevicePath \"\""
	Dec 16 04:28:18 addons-920118 kubelet[1286]: I1216 04:28:18.994090    1286 scope.go:117] "RemoveContainer" containerID="4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d"
	Dec 16 04:28:19 addons-920118 kubelet[1286]: I1216 04:28:19.003751    1286 scope.go:117] "RemoveContainer" containerID="4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d"
	Dec 16 04:28:19 addons-920118 kubelet[1286]: E1216 04:28:19.004128    1286 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d\": container with ID starting with 4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d not found: ID does not exist" containerID="4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d"
	Dec 16 04:28:19 addons-920118 kubelet[1286]: I1216 04:28:19.004171    1286 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d"} err="failed to get container status \"4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d\": rpc error: code = NotFound desc = could not find container \"4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d\": container with ID starting with 4bf651d2a3e8f08673d6e8cd9d06d2fe5481ac38f7df8222a0be5915ae3d066d not found: ID does not exist"
	Dec 16 04:28:19 addons-920118 kubelet[1286]: I1216 04:28:19.433653    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9106c10-d799-4a5b-acd1-350c69cc976a" path="/var/lib/kubelet/pods/a9106c10-d799-4a5b-acd1-350c69cc976a/volumes"
	Dec 16 04:28:21 addons-920118 kubelet[1286]: I1216 04:28:21.432086    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-sh6dp" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:28:26 addons-920118 kubelet[1286]: I1216 04:28:26.431756    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-kffg9" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:28:29 addons-920118 kubelet[1286]: I1216 04:28:29.431526    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-82xlt" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:29:33 addons-920118 kubelet[1286]: I1216 04:29:33.431951    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-kffg9" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:29:43 addons-920118 kubelet[1286]: I1216 04:29:43.434205    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-sh6dp" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:29:48 addons-920118 kubelet[1286]: I1216 04:29:48.431201    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-82xlt" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:30:04 addons-920118 kubelet[1286]: I1216 04:30:04.718478    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b4bh\" (UniqueName: \"kubernetes.io/projected/1deb119e-9154-43b4-87f7-01e822560871-kube-api-access-8b4bh\") pod \"hello-world-app-5d498dc89-5hcvr\" (UID: \"1deb119e-9154-43b4-87f7-01e822560871\") " pod="default/hello-world-app-5d498dc89-5hcvr"
	Dec 16 04:30:04 addons-920118 kubelet[1286]: I1216 04:30:04.718523    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1deb119e-9154-43b4-87f7-01e822560871-gcp-creds\") pod \"hello-world-app-5d498dc89-5hcvr\" (UID: \"1deb119e-9154-43b4-87f7-01e822560871\") " pod="default/hello-world-app-5d498dc89-5hcvr"
	Dec 16 04:30:06 addons-920118 kubelet[1286]: I1216 04:30:06.379892    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-5hcvr" podStartSLOduration=1.974003916 podStartE2EDuration="2.379870877s" podCreationTimestamp="2025-12-16 04:30:04 +0000 UTC" firstStartedPulling="2025-12-16 04:30:04.943100133 +0000 UTC m=+241.591356939" lastFinishedPulling="2025-12-16 04:30:05.348967093 +0000 UTC m=+241.997223900" observedRunningTime="2025-12-16 04:30:06.378435447 +0000 UTC m=+243.026692274" watchObservedRunningTime="2025-12-16 04:30:06.379870877 +0000 UTC m=+243.028127705"
	
	
	==> storage-provisioner [fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021] <==
	W1216 04:29:41.657919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:43.660802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:43.665676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:45.668453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:45.672038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:47.674775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:47.678431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:49.681265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:49.684739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:51.688181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:51.691591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:53.694521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:53.697848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:55.700808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:55.704124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:57.706883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:57.711136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:59.714040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:29:59.718204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:30:01.721375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:30:01.724915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:30:03.727772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:30:03.731300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:30:05.734182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:30:05.738944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
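
Note on the log dump above: the storage-provisioner section is one client-go warning repeated every couple of seconds, `v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice`, so the provisioner is still using the v1 Endpoints API. A minimal client-go sketch of the replacement call the warning points to (illustrative only; the kubeconfig loading and the kube-system namespace are assumptions, not something this report shows):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: running outside the cluster with the default kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Deprecated call that triggers the warning flooding the log above:
		//   cs.CoreV1().Endpoints("kube-system").List(...)
		// Replacement: the discovery.k8s.io/v1 EndpointSlice API.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}
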
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-920118 -n addons-920118
helpers_test.go:270: (dbg) Run:  kubectl --context addons-920118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-dhdgp ingress-nginx-admission-patch-wgjhv
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-920118 describe pod ingress-nginx-admission-create-dhdgp ingress-nginx-admission-patch-wgjhv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-920118 describe pod ingress-nginx-admission-create-dhdgp ingress-nginx-admission-patch-wgjhv: exit status 1 (56.672441ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dhdgp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wgjhv" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-920118 describe pod ingress-nginx-admission-create-dhdgp ingress-nginx-admission-patch-wgjhv: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (236.106803ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:30:07.122612   24593 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:30:07.122773   24593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:30:07.122783   24593 out.go:374] Setting ErrFile to fd 2...
	I1216 04:30:07.122788   24593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:30:07.122979   24593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:30:07.123258   24593 mustload.go:66] Loading cluster: addons-920118
	I1216 04:30:07.123551   24593 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:30:07.123567   24593 addons.go:622] checking whether the cluster is paused
	I1216 04:30:07.123645   24593 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:30:07.123656   24593 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:30:07.124013   24593 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:30:07.142074   24593 ssh_runner.go:195] Run: systemctl --version
	I1216 04:30:07.142130   24593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:30:07.160127   24593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:30:07.251619   24593 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:30:07.251688   24593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:30:07.280243   24593 cri.go:89] found id: "52cafa55de02b543343aba6b83ac3e9ffcbe29c2e6789d908423c29d79945e33"
	I1216 04:30:07.280266   24593 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:30:07.280270   24593 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:30:07.280274   24593 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:30:07.280276   24593 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:30:07.280279   24593 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:30:07.280282   24593 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:30:07.280285   24593 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:30:07.280288   24593 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:30:07.280294   24593 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:30:07.280296   24593 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:30:07.280299   24593 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:30:07.280301   24593 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:30:07.280304   24593 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:30:07.280307   24593 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:30:07.280312   24593 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:30:07.280315   24593 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:30:07.280319   24593 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:30:07.280322   24593 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:30:07.280324   24593 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:30:07.280335   24593 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:30:07.280338   24593 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:30:07.280341   24593 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:30:07.280343   24593 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:30:07.280346   24593 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:30:07.280349   24593 cri.go:89] found id: ""
	I1216 04:30:07.280385   24593 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:30:07.295090   24593 out.go:203] 
	W1216 04:30:07.296342   24593 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:30:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:30:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:30:07.296366   24593 out.go:285] * 
	* 
	W1216 04:30:07.299281   24593 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:30:07.300542   24593 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable ingress --alsologtostderr -v=1: exit status 11 (234.702271ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:30:07.359051   24654 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:30:07.359207   24654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:30:07.359218   24654 out.go:374] Setting ErrFile to fd 2...
	I1216 04:30:07.359222   24654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:30:07.359417   24654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:30:07.359666   24654 mustload.go:66] Loading cluster: addons-920118
	I1216 04:30:07.359993   24654 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:30:07.360010   24654 addons.go:622] checking whether the cluster is paused
	I1216 04:30:07.360132   24654 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:30:07.360147   24654 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:30:07.360562   24654 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:30:07.379430   24654 ssh_runner.go:195] Run: systemctl --version
	I1216 04:30:07.379485   24654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:30:07.395988   24654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:30:07.487473   24654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:30:07.487546   24654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:30:07.515992   24654 cri.go:89] found id: "52cafa55de02b543343aba6b83ac3e9ffcbe29c2e6789d908423c29d79945e33"
	I1216 04:30:07.516014   24654 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:30:07.516040   24654 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:30:07.516046   24654 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:30:07.516052   24654 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:30:07.516057   24654 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:30:07.516061   24654 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:30:07.516065   24654 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:30:07.516070   24654 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:30:07.516077   24654 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:30:07.516082   24654 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:30:07.516092   24654 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:30:07.516098   24654 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:30:07.516114   24654 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:30:07.516121   24654 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:30:07.516132   24654 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:30:07.516139   24654 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:30:07.516143   24654 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:30:07.516146   24654 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:30:07.516148   24654 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:30:07.516153   24654 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:30:07.516156   24654 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:30:07.516160   24654 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:30:07.516164   24654 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:30:07.516168   24654 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:30:07.516173   24654 cri.go:89] found id: ""
	I1216 04:30:07.516220   24654 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:30:07.530161   24654 out.go:203] 
	W1216 04:30:07.531415   24654 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:30:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:30:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:30:07.531435   24654 out.go:285] * 
	* 
	W1216 04:30:07.534354   24654 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:30:07.535574   24654 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.12s)
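
All of the addons-disable failures in this run share the stderr shape above: the paused-cluster check lists kube-system containers with crictl (a list of container IDs comes back) and then runs `sudo runc list -f json`, which exits 1 with `open /run/runc: no such file or directory`, most likely because crio on this image is configured with a different OCI runtime, so runc's state directory was never created. A minimal Go sketch of that failing sequence, run directly on the node (the real check goes over SSH; this is an illustration, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Step 1 (succeeds in the logs above): list kube-system containers.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

		// Step 2 (fails in the logs above): /run/runc does not exist on this node.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("runc list: %s\n", out)
	}
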

TestAddons/parallel/InspektorGadget (5.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-76954" [1bd34871-dcf9-4125-a957-6dd090421af9] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002756607s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (234.235258ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:27:47.541091   20498 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:47.541234   20498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:47.541243   20498 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:47.541247   20498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:47.541443   20498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:47.541686   20498 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:47.542004   20498 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:47.542032   20498 addons.go:622] checking whether the cluster is paused
	I1216 04:27:47.542115   20498 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:47.542126   20498 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:47.542473   20498 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:47.561197   20498 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:47.561264   20498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:47.578753   20498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:47.669872   20498 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:47.669994   20498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:47.698480   20498 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:47.698503   20498 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:47.698507   20498 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:47.698511   20498 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:47.698514   20498 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:47.698518   20498 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:47.698521   20498 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:47.698524   20498 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:47.698526   20498 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:47.698539   20498 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:47.698542   20498 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:47.698545   20498 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:47.698547   20498 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:47.698550   20498 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:47.698553   20498 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:47.698561   20498 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:47.698567   20498 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:47.698572   20498 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:47.698575   20498 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:47.698578   20498 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:47.698582   20498 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:47.698588   20498 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:47.698590   20498 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:47.698593   20498 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:47.698599   20498 cri.go:89] found id: ""
	I1216 04:27:47.698637   20498 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:47.712079   20498 out.go:203] 
	W1216 04:27:47.713334   20498 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:47.713354   20498 out.go:285] * 
	* 
	W1216 04:27:47.716200   20498 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:47.717402   20498 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.24s)
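Note: every addons enable/disable failure in this report shares one root cause. Before touching an addon, minikube checks whether the cluster is paused (addons.go:622, then cri.go:54 listing containers with State:paused), and on this crio node the final step, sudo runc list -f json, exits 1 because /run/runc does not exist. A minimal reproduction sketch, assuming shell access to the node through the same profile; both commands are copied from the stderr blocks in this report:

# Paused-state check that minikube runs before enabling/disabling an addon;
# on this node it fails with: open /run/runc: no such file or directory
minikube -p addons-920118 ssh -- sudo runc list -f json

# The CRI-level listing that precedes it in the logs succeeds, so the
# kube-system containers themselves are still reachable through crictl:
minikube -p addons-920118 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system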

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.911717ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002269605s
addons_test.go:465: (dbg) Run:  kubectl --context addons-920118 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (243.638324ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1216 04:27:42.296536   19476 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:42.296662   19476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:42.296673   19476 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:42.296679   19476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:42.296895   19476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:42.297183   19476 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:42.297604   19476 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:42.297628   19476 addons.go:622] checking whether the cluster is paused
	I1216 04:27:42.297753   19476 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:42.297768   19476 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:42.298226   19476 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:42.315713   19476 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:42.315765   19476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:42.333680   19476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:42.425592   19476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:42.425674   19476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:42.457166   19476 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:42.457205   19476 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:42.457217   19476 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:42.457222   19476 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:42.457226   19476 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:42.457237   19476 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:42.457242   19476 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:42.457245   19476 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:42.457249   19476 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:42.457263   19476 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:42.457270   19476 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:42.457274   19476 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:42.457278   19476 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:42.457282   19476 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:42.457286   19476 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:42.457301   19476 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:42.457312   19476 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:42.457319   19476 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:42.457323   19476 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:42.457327   19476 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:42.457334   19476 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:42.457338   19476 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:42.457342   19476 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:42.457346   19476 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:42.457350   19476 cri.go:89] found id: ""
	I1216 04:27:42.457424   19476 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:42.472044   19476 out.go:203] 
	W1216 04:27:42.473465   19476 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:42.473495   19476 out.go:285] * 
	W1216 04:27:42.477056   19476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:42.478141   19476 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

TestAddons/parallel/CSI (40.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1216 04:27:39.637180    8592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 04:27:39.640203    8592 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 04:27:39.640235    8592 kapi.go:107] duration metric: took 3.068555ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.082097ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-920118 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-920118 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [75fadaa1-5d7a-4834-95ba-27cd34ebd20c] Pending
helpers_test.go:353: "task-pv-pod" [75fadaa1-5d7a-4834-95ba-27cd34ebd20c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [75fadaa1-5d7a-4834-95ba-27cd34ebd20c] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00328977s
addons_test.go:574: (dbg) Run:  kubectl --context addons-920118 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-920118 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-920118 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-920118 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-920118 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-920118 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-920118 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [a9106c10-d799-4a5b-acd1-350c69cc976a] Pending
helpers_test.go:353: "task-pv-pod-restore" [a9106c10-d799-4a5b-acd1-350c69cc976a] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003867618s
addons_test.go:616: (dbg) Run:  kubectl --context addons-920118 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-920118 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-920118 delete volumesnapshot new-snapshot-demo
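For reference, the objects the flow above creates and tears down (hpvc, task-pv-pod, new-snapshot-demo, hpvc-restore) come from manifests under testdata/csi-hostpath-driver/ that are not reproduced in this report. The sketch below is illustrative only: it assumes the csi-hostpath-sc storage class and csi-hostpath-snapclass snapshot class installed by the csi-hostpath-driver and volumesnapshots addons, and its field values are assumptions, not the real testdata files:

# Illustrative approximation of pvc.yaml and snapshot.yaml (assumed values):
kubectl --context addons-920118 apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  storageClassName: csi-hostpath-sc        # assumed class name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF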
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (239.751322ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1216 04:28:19.384672   22432 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:28:19.384937   22432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:28:19.384946   22432 out.go:374] Setting ErrFile to fd 2...
	I1216 04:28:19.384952   22432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:28:19.385182   22432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:28:19.385437   22432 mustload.go:66] Loading cluster: addons-920118
	I1216 04:28:19.385763   22432 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:28:19.385782   22432 addons.go:622] checking whether the cluster is paused
	I1216 04:28:19.385862   22432 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:28:19.385873   22432 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:28:19.386260   22432 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:28:19.404079   22432 ssh_runner.go:195] Run: systemctl --version
	I1216 04:28:19.404131   22432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:28:19.422987   22432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:28:19.515874   22432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:28:19.515976   22432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:28:19.545744   22432 cri.go:89] found id: "52cafa55de02b543343aba6b83ac3e9ffcbe29c2e6789d908423c29d79945e33"
	I1216 04:28:19.545768   22432 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:28:19.545774   22432 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:28:19.545778   22432 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:28:19.545783   22432 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:28:19.545786   22432 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:28:19.545789   22432 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:28:19.545792   22432 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:28:19.545795   22432 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:28:19.545799   22432 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:28:19.545802   22432 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:28:19.545805   22432 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:28:19.545808   22432 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:28:19.545810   22432 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:28:19.545813   22432 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:28:19.545831   22432 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:28:19.545838   22432 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:28:19.545843   22432 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:28:19.545847   22432 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:28:19.545849   22432 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:28:19.545852   22432 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:28:19.545855   22432 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:28:19.545858   22432 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:28:19.545861   22432 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:28:19.545863   22432 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:28:19.545866   22432 cri.go:89] found id: ""
	I1216 04:28:19.545913   22432 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:28:19.559956   22432 out.go:203] 
	W1216 04:28:19.561097   22432 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:28:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:28:19.561112   22432 out.go:285] * 
	W1216 04:28:19.564013   22432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:28:19.565218   22432 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (239.196587ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1216 04:28:19.623979   22497 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:28:19.624120   22497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:28:19.624129   22497 out.go:374] Setting ErrFile to fd 2...
	I1216 04:28:19.624133   22497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:28:19.624343   22497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:28:19.624610   22497 mustload.go:66] Loading cluster: addons-920118
	I1216 04:28:19.624936   22497 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:28:19.624955   22497 addons.go:622] checking whether the cluster is paused
	I1216 04:28:19.625055   22497 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:28:19.625068   22497 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:28:19.625433   22497 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:28:19.642719   22497 ssh_runner.go:195] Run: systemctl --version
	I1216 04:28:19.642772   22497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:28:19.659405   22497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:28:19.750587   22497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:28:19.750663   22497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:28:19.783231   22497 cri.go:89] found id: "52cafa55de02b543343aba6b83ac3e9ffcbe29c2e6789d908423c29d79945e33"
	I1216 04:28:19.783254   22497 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:28:19.783259   22497 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:28:19.783266   22497 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:28:19.783269   22497 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:28:19.783273   22497 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:28:19.783277   22497 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:28:19.783280   22497 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:28:19.783283   22497 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:28:19.783288   22497 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:28:19.783298   22497 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:28:19.783301   22497 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:28:19.783304   22497 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:28:19.783311   22497 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:28:19.783314   22497 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:28:19.783319   22497 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:28:19.783324   22497 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:28:19.783327   22497 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:28:19.783330   22497 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:28:19.783333   22497 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:28:19.783337   22497 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:28:19.783342   22497 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:28:19.783345   22497 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:28:19.783353   22497 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:28:19.783357   22497 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:28:19.783360   22497 cri.go:89] found id: ""
	I1216 04:28:19.783397   22497 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:28:19.798740   22497 out.go:203] 
	W1216 04:28:19.799989   22497 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:28:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:28:19.800013   22497 out.go:285] * 
	W1216 04:28:19.803270   22497 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:28:19.804513   22497 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.17s)

TestAddons/parallel/Headlamp (2.47s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-920118 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-920118 --alsologtostderr -v=1: exit status 11 (235.272076ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1216 04:27:37.222264   18585 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:37.222415   18585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:37.222424   18585 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:37.222428   18585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:37.222628   18585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:37.222857   18585 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:37.223176   18585 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:37.223199   18585 addons.go:622] checking whether the cluster is paused
	I1216 04:27:37.223313   18585 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:37.223331   18585 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:37.223740   18585 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:37.241239   18585 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:37.241290   18585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:37.259371   18585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:37.349426   18585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:37.349513   18585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:37.377454   18585 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:37.377478   18585 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:37.377486   18585 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:37.377490   18585 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:37.377495   18585 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:37.377501   18585 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:37.377506   18585 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:37.377511   18585 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:37.377516   18585 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:37.377526   18585 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:37.377534   18585 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:37.377539   18585 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:37.377541   18585 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:37.377544   18585 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:37.377547   18585 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:37.377559   18585 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:37.377565   18585 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:37.377570   18585 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:37.377572   18585 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:37.377575   18585 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:37.377580   18585 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:37.377582   18585 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:37.377585   18585 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:37.377588   18585 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:37.377590   18585 cri.go:89] found id: ""
	I1216 04:27:37.377625   18585 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:37.391221   18585 out.go:203] 
	W1216 04:27:37.392321   18585 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:37.392339   18585 out.go:285] * 
	W1216 04:27:37.395301   18585 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:37.396456   18585 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-920118 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-920118
helpers_test.go:244: (dbg) docker inspect addons-920118:

-- stdout --
	[
	    {
	        "Id": "b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02",
	        "Created": "2025-12-16T04:25:48.17220649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 10997,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:25:48.206794545Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02/hostname",
	        "HostsPath": "/var/lib/docker/containers/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02/hosts",
	        "LogPath": "/var/lib/docker/containers/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02/b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02-json.log",
	        "Name": "/addons-920118",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-920118:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-920118",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b481032a3d513dd211366d9d5de993de818fea19b797e2da253c2774c6de4b02",
	                "LowerDir": "/var/lib/docker/overlay2/16783fba5330964cabd30c2581f65c1cbd9c022ace92e3c17286fcca62d5dc9c-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16783fba5330964cabd30c2581f65c1cbd9c022ace92e3c17286fcca62d5dc9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16783fba5330964cabd30c2581f65c1cbd9c022ace92e3c17286fcca62d5dc9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16783fba5330964cabd30c2581f65c1cbd9c022ace92e3c17286fcca62d5dc9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-920118",
	                "Source": "/var/lib/docker/volumes/addons-920118/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-920118",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-920118",
	                "name.minikube.sigs.k8s.io": "addons-920118",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8dce173bf4f1fcd01cd24763bbac8c8884decb5b2b08191a27e99254536ac19e",
	            "SandboxKey": "/var/run/docker/netns/8dce173bf4f1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-920118": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39d8d0633a2fff8310f5532936b9a77fc9a6621291b9e67d3291508ccb49b4f4",
	                    "EndpointID": "464bb78d2c6befd6118a2610a48962d5812608a15b1195285613f508c650a521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "e2:3a:0d:19:df:4b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-920118",
	                        "b481032a3d51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
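The "22/tcp" -> 127.0.0.1:32768 mapping in the inspect output above is where the earlier sshutil.go lines (new ssh client: &{IP:127.0.0.1 Port:32768 ...}) get their endpoint. A sketch of the same lookup, mirroring the docker container inspect template that appears in the cli_runner lines of each stderr block:

# Resolve the host port mapped to the node's SSH port (22/tcp):
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-920118
# prints: 32768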
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-920118 -n addons-920118
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-920118 logs -n 25: (1.080541944s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-064794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-064794   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-064794                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-064794   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ -o=json --download-only -p download-only-159842 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-159842   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-159842                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-159842   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ -o=json --download-only -p download-only-317050 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-317050   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-317050                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-317050   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-064794                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-064794   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-159842                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-159842   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-317050                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-317050   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ --download-only -p download-docker-190309 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-190309 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ -p download-docker-190309                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-190309 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ --download-only -p binary-mirror-231674 --alsologtostderr --binary-mirror http://127.0.0.1:36059 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-231674   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ -p binary-mirror-231674                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-231674   │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ addons  │ enable dashboard -p addons-920118                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-920118          │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-920118                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-920118          │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ start   │ -p addons-920118 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-920118          │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:27 UTC │
	│ addons  │ addons-920118 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-920118          │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ addons-920118 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-920118          │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-920118 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-920118          │ jenkins │ v1.37.0 │ 16 Dec 25 04:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:25:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:25:24.306358   10343 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:25:24.306576   10343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:24.306584   10343 out.go:374] Setting ErrFile to fd 2...
	I1216 04:25:24.306588   10343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:24.306752   10343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:25:24.307276   10343 out.go:368] Setting JSON to false
	I1216 04:25:24.308046   10343 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":475,"bootTime":1765858649,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:25:24.308114   10343 start.go:143] virtualization: kvm guest
	I1216 04:25:24.309884   10343 out.go:179] * [addons-920118] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:25:24.311115   10343 notify.go:221] Checking for updates...
	I1216 04:25:24.311143   10343 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:25:24.312224   10343 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:25:24.313258   10343 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:25:24.314375   10343 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:25:24.315408   10343 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:25:24.316518   10343 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:25:24.317700   10343 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:25:24.339458   10343 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:25:24.339605   10343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:24.392768   10343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 04:25:24.384035903 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:24.392877   10343 docker.go:319] overlay module found
	I1216 04:25:24.394500   10343 out.go:179] * Using the docker driver based on user configuration
	I1216 04:25:24.395565   10343 start.go:309] selected driver: docker
	I1216 04:25:24.395579   10343 start.go:927] validating driver "docker" against <nil>
	I1216 04:25:24.395588   10343 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:25:24.396245   10343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:24.452141   10343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 04:25:24.443254876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:24.452284   10343 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:25:24.452483   10343 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:25:24.454014   10343 out.go:179] * Using Docker driver with root privileges
	I1216 04:25:24.455179   10343 cni.go:84] Creating CNI manager for ""
	I1216 04:25:24.455240   10343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:25:24.455251   10343 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:25:24.455314   10343 start.go:353] cluster config:
	{Name:addons-920118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:25:24.456557   10343 out.go:179] * Starting "addons-920118" primary control-plane node in "addons-920118" cluster
	I1216 04:25:24.457592   10343 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:25:24.458534   10343 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:25:24.459376   10343 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:24.459402   10343 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:24.459407   10343 cache.go:65] Caching tarball of preloaded images
	I1216 04:25:24.459456   10343 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:25:24.459482   10343 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 04:25:24.459490   10343 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 04:25:24.459790   10343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/config.json ...
	I1216 04:25:24.459815   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/config.json: {Name:mke945c6726d35da1068061ecb7d185de0f97ad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:24.475378   10343 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 04:25:24.475518   10343 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 04:25:24.475544   10343 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1216 04:25:24.475552   10343 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1216 04:25:24.475559   10343 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 04:25:24.475565   10343 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1216 04:25:37.148427   10343 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1216 04:25:37.148477   10343 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:25:37.148544   10343 start.go:360] acquireMachinesLock for addons-920118: {Name:mkb2ed6fe27dda6ced07935a09984425c540259b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:25:37.148658   10343 start.go:364] duration metric: took 90.869µs to acquireMachinesLock for "addons-920118"
	I1216 04:25:37.148688   10343 start.go:93] Provisioning new machine with config: &{Name:addons-920118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:25:37.148773   10343 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:25:37.150520   10343 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 04:25:37.150790   10343 start.go:159] libmachine.API.Create for "addons-920118" (driver="docker")
	I1216 04:25:37.150822   10343 client.go:173] LocalClient.Create starting
	I1216 04:25:37.150949   10343 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 04:25:37.195198   10343 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 04:25:37.310407   10343 cli_runner.go:164] Run: docker network inspect addons-920118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:25:37.328600   10343 cli_runner.go:211] docker network inspect addons-920118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:25:37.328674   10343 network_create.go:284] running [docker network inspect addons-920118] to gather additional debugging logs...
	I1216 04:25:37.328699   10343 cli_runner.go:164] Run: docker network inspect addons-920118
	W1216 04:25:37.345527   10343 cli_runner.go:211] docker network inspect addons-920118 returned with exit code 1
	I1216 04:25:37.345554   10343 network_create.go:287] error running [docker network inspect addons-920118]: docker network inspect addons-920118: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-920118 not found
	I1216 04:25:37.345566   10343 network_create.go:289] output of [docker network inspect addons-920118]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-920118 not found
	
	** /stderr **
	I1216 04:25:37.345695   10343 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:25:37.361890   10343 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dad130}
	I1216 04:25:37.361925   10343 network_create.go:124] attempt to create docker network addons-920118 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 04:25:37.361987   10343 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-920118 addons-920118
	I1216 04:25:37.408654   10343 network_create.go:108] docker network addons-920118 192.168.49.0/24 created
	I1216 04:25:37.408683   10343 kic.go:121] calculated static IP "192.168.49.2" for the "addons-920118" container
	I1216 04:25:37.408744   10343 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:25:37.426169   10343 cli_runner.go:164] Run: docker volume create addons-920118 --label name.minikube.sigs.k8s.io=addons-920118 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:25:37.444124   10343 oci.go:103] Successfully created a docker volume addons-920118
	I1216 04:25:37.444225   10343 cli_runner.go:164] Run: docker run --rm --name addons-920118-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-920118 --entrypoint /usr/bin/test -v addons-920118:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 04:25:44.280842   10343 cli_runner.go:217] Completed: docker run --rm --name addons-920118-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-920118 --entrypoint /usr/bin/test -v addons-920118:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (6.836576098s)
	I1216 04:25:44.280873   10343 oci.go:107] Successfully prepared a docker volume addons-920118
	I1216 04:25:44.280960   10343 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:44.280976   10343 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:25:44.281054   10343 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-920118:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:25:48.096502   10343 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-920118:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.815406369s)
	I1216 04:25:48.096536   10343 kic.go:203] duration metric: took 3.815558003s to extract preloaded images to volume ...
	W1216 04:25:48.096637   10343 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 04:25:48.096671   10343 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 04:25:48.096709   10343 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:25:48.157180   10343 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-920118 --name addons-920118 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-920118 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-920118 --network addons-920118 --ip 192.168.49.2 --volume addons-920118:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 04:25:48.452062   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Running}}
	I1216 04:25:48.471658   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:25:48.490094   10343 cli_runner.go:164] Run: docker exec addons-920118 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:25:48.542353   10343 oci.go:144] the created container "addons-920118" has a running status.
	I1216 04:25:48.542389   10343 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa...
	I1216 04:25:48.627941   10343 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:25:48.652189   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:25:48.670781   10343 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:25:48.670806   10343 kic_runner.go:114] Args: [docker exec --privileged addons-920118 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:25:48.716085   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:25:48.737416   10343 machine.go:94] provisionDockerMachine start ...
	I1216 04:25:48.737528   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:48.762882   10343 main.go:143] libmachine: Using SSH client type: native
	I1216 04:25:48.763237   10343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 04:25:48.763264   10343 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:25:48.764805   10343 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34766->127.0.0.1:32768: read: connection reset by peer
	I1216 04:25:51.890870   10343 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-920118
	
	I1216 04:25:51.890894   10343 ubuntu.go:182] provisioning hostname "addons-920118"
	I1216 04:25:51.890989   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:51.908485   10343 main.go:143] libmachine: Using SSH client type: native
	I1216 04:25:51.908747   10343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 04:25:51.908761   10343 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-920118 && echo "addons-920118" | sudo tee /etc/hostname
	I1216 04:25:52.042142   10343 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-920118
	
	I1216 04:25:52.042218   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.060680   10343 main.go:143] libmachine: Using SSH client type: native
	I1216 04:25:52.060888   10343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 04:25:52.060903   10343 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-920118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-920118/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-920118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:25:52.185763   10343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:25:52.185789   10343 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 04:25:52.185812   10343 ubuntu.go:190] setting up certificates
	I1216 04:25:52.185827   10343 provision.go:84] configureAuth start
	I1216 04:25:52.185896   10343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-920118
	I1216 04:25:52.203118   10343 provision.go:143] copyHostCerts
	I1216 04:25:52.203190   10343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 04:25:52.203314   10343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 04:25:52.203376   10343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 04:25:52.203426   10343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.addons-920118 san=[127.0.0.1 192.168.49.2 addons-920118 localhost minikube]
	I1216 04:25:52.280302   10343 provision.go:177] copyRemoteCerts
	I1216 04:25:52.280353   10343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:25:52.280386   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.297383   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:52.387974   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:25:52.406952   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 04:25:52.423669   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:25:52.439998   10343 provision.go:87] duration metric: took 254.147423ms to configureAuth
	I1216 04:25:52.440038   10343 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:25:52.440203   10343 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:25:52.440318   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.457699   10343 main.go:143] libmachine: Using SSH client type: native
	I1216 04:25:52.457964   10343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1216 04:25:52.457982   10343 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:25:52.720646   10343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:25:52.720669   10343 machine.go:97] duration metric: took 3.983227082s to provisionDockerMachine
	I1216 04:25:52.720680   10343 client.go:176] duration metric: took 15.569851959s to LocalClient.Create
	I1216 04:25:52.720699   10343 start.go:167] duration metric: took 15.569910426s to libmachine.API.Create "addons-920118"
	I1216 04:25:52.720709   10343 start.go:293] postStartSetup for "addons-920118" (driver="docker")
	I1216 04:25:52.720722   10343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:25:52.720776   10343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:25:52.720821   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.738407   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:52.831914   10343 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:25:52.835309   10343 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:25:52.835335   10343 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:25:52.835345   10343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 04:25:52.835395   10343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 04:25:52.835418   10343 start.go:296] duration metric: took 114.703208ms for postStartSetup
	I1216 04:25:52.835690   10343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-920118
	I1216 04:25:52.852813   10343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/config.json ...
	I1216 04:25:52.853106   10343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:25:52.853145   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.870197   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:52.957892   10343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:25:52.962170   10343 start.go:128] duration metric: took 15.813382554s to createHost
	I1216 04:25:52.962215   10343 start.go:83] releasing machines lock for "addons-920118", held for 15.813541626s
	I1216 04:25:52.962285   10343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-920118
	I1216 04:25:52.979699   10343 ssh_runner.go:195] Run: cat /version.json
	I1216 04:25:52.979746   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:52.979799   10343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:25:52.979866   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:25:53.000266   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:53.000267   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:25:53.142592   10343 ssh_runner.go:195] Run: systemctl --version
	I1216 04:25:53.148701   10343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:25:53.183626   10343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:25:53.188170   10343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:25:53.188242   10343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:25:53.213072   10343 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 04:25:53.213092   10343 start.go:496] detecting cgroup driver to use...
	I1216 04:25:53.213120   10343 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 04:25:53.213171   10343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:25:53.228428   10343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:25:53.240521   10343 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:25:53.240583   10343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:25:53.255818   10343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:25:53.273135   10343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:25:53.356263   10343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:25:53.439475   10343 docker.go:234] disabling docker service ...
	I1216 04:25:53.439535   10343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:25:53.457365   10343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:25:53.469667   10343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:25:53.550013   10343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:25:53.628535   10343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:25:53.640364   10343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:25:53.653242   10343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:25:53.653293   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.663404   10343 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 04:25:53.663457   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.672117   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.680079   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.688819   10343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:25:53.696770   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.705612   10343 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.718875   10343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:25:53.727101   10343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:25:53.733941   10343 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 04:25:53.733982   10343 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 04:25:53.745968   10343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:25:53.754888   10343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:25:53.834062   10343 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:25:53.969642   10343 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:25:53.969726   10343 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:25:53.973558   10343 start.go:564] Will wait 60s for crictl version
	I1216 04:25:53.973618   10343 ssh_runner.go:195] Run: which crictl
	I1216 04:25:53.977126   10343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:25:54.001480   10343 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:25:54.001610   10343 ssh_runner.go:195] Run: crio --version
	I1216 04:25:54.029484   10343 ssh_runner.go:195] Run: crio --version
	I1216 04:25:54.059887   10343 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 04:25:54.060989   10343 cli_runner.go:164] Run: docker network inspect addons-920118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:25:54.078626   10343 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:25:54.082630   10343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:25:54.092334   10343 kubeadm.go:884] updating cluster {Name:addons-920118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:25:54.092442   10343 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:54.092495   10343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:25:54.123722   10343 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:25:54.123743   10343 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:25:54.123785   10343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:25:54.148910   10343 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:25:54.148931   10343 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:25:54.148939   10343 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 04:25:54.149047   10343 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-920118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
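
	The unit fragment above is written out a moment later as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Plain systemd tooling shows the merged result on the node:

	    # Render kubelet.service together with all of its drop-ins
	    $ systemctl cat kubelet
	    # Verify the ExecStart override is live after the daemon-reload
	    $ systemctl show kubelet -p ExecStart
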
	I1216 04:25:54.149115   10343 ssh_runner.go:195] Run: crio config
	I1216 04:25:54.192561   10343 cni.go:84] Creating CNI manager for ""
	I1216 04:25:54.192591   10343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:25:54.192606   10343 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:25:54.192628   10343 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-920118 NodeName:addons-920118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:25:54.192748   10343 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-920118"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
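
	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged below as /var/tmp/minikube/kubeadm.yaml. kubeadm ships a static validator that can vet such a file before init is ever attempted, here via the binary path the log itself uses:

	    # Offline sanity check of the generated config; no running cluster needed
	    $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml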
	
	I1216 04:25:54.192807   10343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 04:25:54.200501   10343 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:25:54.200569   10343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:25:54.207908   10343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 04:25:54.220096   10343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 04:25:54.234558   10343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 04:25:54.246381   10343 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:25:54.249615   10343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:25:54.259147   10343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:25:54.331825   10343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:25:54.357421   10343 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118 for IP: 192.168.49.2
	I1216 04:25:54.357447   10343 certs.go:195] generating shared ca certs ...
	I1216 04:25:54.357465   10343 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.357601   10343 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 04:25:54.540247   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt ...
	I1216 04:25:54.540276   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt: {Name:mk29534da1360337bfbee56ec9908cf197b6a751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.540456   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key ...
	I1216 04:25:54.540466   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key: {Name:mk6b443e82720727e2eac180e3464d924488c446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.540536   10343 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 04:25:54.597807   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt ...
	I1216 04:25:54.597835   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt: {Name:mk7e765cfabf8a69146f9a1605dc7c1b7ad36407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.597998   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key ...
	I1216 04:25:54.598011   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key: {Name:mkcc6a4851601e8ab1dfec327bd3214ef09d13d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.598093   10343 certs.go:257] generating profile certs ...
	I1216 04:25:54.598145   10343 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.key
	I1216 04:25:54.598158   10343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt with IP's: []
	I1216 04:25:54.736851   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt ...
	I1216 04:25:54.736884   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: {Name:mk13e163a196d53ed0dd1104bdd79cad8883d4ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.737075   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.key ...
	I1216 04:25:54.737088   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.key: {Name:mk4910ee3534a83b82f4bac6b5f9a0531c23ca30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.737163   10343 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key.842ef08a
	I1216 04:25:54.737182   10343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt.842ef08a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 04:25:54.845968   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt.842ef08a ...
	I1216 04:25:54.846002   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt.842ef08a: {Name:mkb3cd644d39010340a045d4cda0c63bb87485c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.846199   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key.842ef08a ...
	I1216 04:25:54.846214   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key.842ef08a: {Name:mkb9634b38fd8e921118fb77a1d7725c1708ee92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.846299   10343 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt.842ef08a -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt
	I1216 04:25:54.846378   10343 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key.842ef08a -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key
	I1216 04:25:54.846483   10343 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.key
	I1216 04:25:54.846508   10343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.crt with IP's: []
	I1216 04:25:54.914645   10343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.crt ...
	I1216 04:25:54.914678   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.crt: {Name:mk2d5a1b6fdc389f6f1e6b1f234405f60337d44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.914860   10343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.key ...
	I1216 04:25:54.914873   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.key: {Name:mk12b693a31445ee37043a8da8505d9712041382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:54.915083   10343 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:25:54.915126   10343 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:25:54.915156   10343 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:25:54.915182   10343 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 04:25:54.915828   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:25:54.934462   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:25:54.951197   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:25:54.967624   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 04:25:54.984244   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 04:25:55.000439   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:25:55.017028   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:25:55.034097   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:25:55.050419   10343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:25:55.069154   10343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
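
	With the certificates staged under /var/lib/minikube/certs, the apiserver leaf can be checked against the SAN set generated earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2):

	    # Print the subjectAltName entries baked into the serving certificate
	    $ sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt \
	        -noout -ext subjectAltName
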
	I1216 04:25:55.081142   10343 ssh_runner.go:195] Run: openssl version
	I1216 04:25:55.086858   10343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:25:55.093779   10343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:25:55.103730   10343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:25:55.107590   10343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:25:55.107638   10343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:25:55.141566   10343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:25:55.149314   10343 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
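
	The b5213941.0 link name is not arbitrary: OpenSSL looks CAs up in a hashed directory by subject hash plus a numeric suffix, so the symlink must be named after whatever the x509 -hash call above printed. Recomputing it ties the two steps together:

	    # Subject hash that determines the /etc/ssl/certs/<hash>.0 link name
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
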
	I1216 04:25:55.156729   10343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:25:55.160184   10343 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:25:55.160233   10343 kubeadm.go:401] StartCluster: {Name:addons-920118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-920118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:25:55.160335   10343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:25:55.160379   10343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:25:55.186913   10343 cri.go:89] found id: ""
	I1216 04:25:55.186992   10343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:25:55.195043   10343 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:25:55.203106   10343 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:25:55.203163   10343 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:25:55.210271   10343 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:25:55.210291   10343 kubeadm.go:158] found existing configuration files:
	
	I1216 04:25:55.210333   10343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:25:55.217417   10343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:25:55.217485   10343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:25:55.224646   10343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:25:55.231696   10343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:25:55.231765   10343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:25:55.238545   10343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:25:55.245603   10343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:25:55.245661   10343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:25:55.253005   10343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:25:55.260403   10343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:25:55.260448   10343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:25:55.268111   10343 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:25:55.324561   10343 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 04:25:55.380198   10343 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
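
	Both warnings come from kubeadm's preflight phase, most of which this invocation skips because the "node" is a shared Docker container whose ports, swap and kernel config belong to the host. When debugging, preflight can be replayed in isolation against the same config:

	    # Replay only the preflight checks with the staged config and binary
	    $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init phase preflight \
	        --config /var/tmp/minikube/kubeadm.yaml
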
	I1216 04:26:04.213886   10343 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 04:26:04.213938   10343 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:26:04.214066   10343 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:26:04.214167   10343 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 04:26:04.214234   10343 kubeadm.go:319] OS: Linux
	I1216 04:26:04.214312   10343 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:26:04.214380   10343 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:26:04.214422   10343 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:26:04.214461   10343 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:26:04.214504   10343 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:26:04.214542   10343 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:26:04.214587   10343 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:26:04.214629   10343 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 04:26:04.214690   10343 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:26:04.214781   10343 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:26:04.214920   10343 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:26:04.215039   10343 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:26:04.217766   10343 out.go:252]   - Generating certificates and keys ...
	I1216 04:26:04.217858   10343 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:26:04.217938   10343 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:26:04.218076   10343 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:26:04.218176   10343 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:26:04.218249   10343 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:26:04.218290   10343 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:26:04.218340   10343 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:26:04.218472   10343 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-920118 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:26:04.218545   10343 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:26:04.218685   10343 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-920118 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:26:04.218794   10343 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:26:04.218892   10343 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:26:04.218956   10343 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:26:04.219062   10343 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:26:04.219128   10343 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:26:04.219208   10343 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:26:04.219253   10343 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:26:04.219349   10343 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:26:04.219409   10343 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:26:04.219474   10343 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:26:04.219527   10343 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:26:04.220769   10343 out.go:252]   - Booting up control plane ...
	I1216 04:26:04.220871   10343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:26:04.220953   10343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:26:04.221047   10343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:26:04.221148   10343 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:26:04.221258   10343 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:26:04.221389   10343 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:26:04.221478   10343 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:26:04.221535   10343 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:26:04.221717   10343 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:26:04.221865   10343 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:26:04.221942   10343 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00146928s
	I1216 04:26:04.222087   10343 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 04:26:04.222215   10343 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1216 04:26:04.222326   10343 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 04:26:04.222451   10343 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 04:26:04.222557   10343 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.241910337s
	I1216 04:26:04.222642   10343 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.935449434s
	I1216 04:26:04.222724   10343 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501982481s
	I1216 04:26:04.222882   10343 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 04:26:04.223009   10343 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 04:26:04.223076   10343 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 04:26:04.223280   10343 kubeadm.go:319] [mark-control-plane] Marking the node addons-920118 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 04:26:04.223368   10343 kubeadm.go:319] [bootstrap-token] Using token: ts7uag.97mgvhceg9rw9a8w
	I1216 04:26:04.224900   10343 out.go:252]   - Configuring RBAC rules ...
	I1216 04:26:04.225049   10343 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 04:26:04.225140   10343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 04:26:04.225302   10343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 04:26:04.225461   10343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 04:26:04.225562   10343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 04:26:04.225630   10343 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 04:26:04.225768   10343 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 04:26:04.225821   10343 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 04:26:04.225889   10343 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 04:26:04.225897   10343 kubeadm.go:319] 
	I1216 04:26:04.225947   10343 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 04:26:04.225953   10343 kubeadm.go:319] 
	I1216 04:26:04.226036   10343 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 04:26:04.226044   10343 kubeadm.go:319] 
	I1216 04:26:04.226065   10343 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 04:26:04.226123   10343 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 04:26:04.226200   10343 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 04:26:04.226209   10343 kubeadm.go:319] 
	I1216 04:26:04.226292   10343 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 04:26:04.226302   10343 kubeadm.go:319] 
	I1216 04:26:04.226374   10343 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 04:26:04.226383   10343 kubeadm.go:319] 
	I1216 04:26:04.226446   10343 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 04:26:04.226525   10343 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 04:26:04.226628   10343 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 04:26:04.226635   10343 kubeadm.go:319] 
	I1216 04:26:04.226735   10343 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 04:26:04.226836   10343 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 04:26:04.226845   10343 kubeadm.go:319] 
	I1216 04:26:04.226918   10343 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ts7uag.97mgvhceg9rw9a8w \
	I1216 04:26:04.227052   10343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 04:26:04.227085   10343 kubeadm.go:319] 	--control-plane 
	I1216 04:26:04.227092   10343 kubeadm.go:319] 
	I1216 04:26:04.227216   10343 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 04:26:04.227227   10343 kubeadm.go:319] 
	I1216 04:26:04.227336   10343 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ts7uag.97mgvhceg9rw9a8w \
	I1216 04:26:04.227491   10343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
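
	The sha256 pin in those join commands can be recomputed from the cluster CA at any time, which is the standard way to hand workers a trustworthy value out-of-band. The cert path below follows minikube's layout from this log; stock kubeadm nodes use /etc/kubernetes/pki/ca.crt instead:

	    # Recompute the --discovery-token-ca-cert-hash value from the CA cert
	    $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'
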
	I1216 04:26:04.227511   10343 cni.go:84] Creating CNI manager for ""
	I1216 04:26:04.227520   10343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:26:04.228974   10343 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 04:26:04.230086   10343 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 04:26:04.234344   10343 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 04:26:04.234359   10343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 04:26:04.247361   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
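
	kindnet is deployed by that apply as a DaemonSet, so rollout status is the quickest health signal once the manifest lands. A sketch, assuming the DaemonSet keeps the name kindnet used by minikube's bundled manifest:

	    # Wait for the CNI pods to become ready on every node
	    $ kubectl -n kube-system rollout status daemonset kindnet
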
	I1216 04:26:04.443014   10343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 04:26:04.443287   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-920118 minikube.k8s.io/updated_at=2025_12_16T04_26_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=addons-920118 minikube.k8s.io/primary=true
	I1216 04:26:04.443287   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
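
	The two kubectl calls above run concurrently (note the identical timestamps): one stamps the node with minikube's metadata labels, the other creates the minikube-rbac binding that grants cluster-admin to kube-system's default service account, while the get sa loop below polls until that account exists. Once it settles:

	    # Confirm the binding the addon machinery depends on
	    $ kubectl get clusterrolebinding minikube-rbac -o wide
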
	I1216 04:26:04.454403   10343 ops.go:34] apiserver oom_adj: -16
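
	ops.go confirms the API server's legacy OOM adjustment of -16, meaning the kernel will strongly prefer to kill almost anything else first. Both the legacy and modern procfs knobs are readable directly:

	    # Legacy interface (range -17..15), exactly what minikube read above
	    $ cat /proc/$(pgrep kube-apiserver)/oom_adj
	    # Modern interface (range -1000..1000) derived from the same setting
	    $ cat /proc/$(pgrep kube-apiserver)/oom_score_adj
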
	I1216 04:26:04.528477   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:05.028714   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:05.529176   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:06.028718   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:06.528807   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:07.028557   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:07.529504   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:08.029377   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:08.528523   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:09.029464   10343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:09.094694   10343 kubeadm.go:1114] duration metric: took 4.651462414s to wait for elevateKubeSystemPrivileges
	I1216 04:26:09.094734   10343 kubeadm.go:403] duration metric: took 13.934502954s to StartCluster
	I1216 04:26:09.094754   10343 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:09.094876   10343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:26:09.095328   10343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:09.095517   10343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 04:26:09.095529   10343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:26:09.095614   10343 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
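
	The toEnable map above is the resolved addon set for this profile; every "Setting addon ...=true" line that follows is one entry of that map being actioned. The same view is available from the CLI once start completes:

	    # Tabulate enabled and disabled addons for the profile
	    $ minikube -p addons-920118 addons list
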
	I1216 04:26:09.095720   10343 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:26:09.095744   10343 addons.go:70] Setting yakd=true in profile "addons-920118"
	I1216 04:26:09.095766   10343 addons.go:239] Setting addon yakd=true in "addons-920118"
	I1216 04:26:09.095772   10343 addons.go:70] Setting gcp-auth=true in profile "addons-920118"
	I1216 04:26:09.095781   10343 addons.go:70] Setting metrics-server=true in profile "addons-920118"
	I1216 04:26:09.095766   10343 addons.go:70] Setting default-storageclass=true in profile "addons-920118"
	I1216 04:26:09.095797   10343 addons.go:239] Setting addon metrics-server=true in "addons-920118"
	I1216 04:26:09.095800   10343 mustload.go:66] Loading cluster: addons-920118
	I1216 04:26:09.095806   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.095815   10343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-920118"
	I1216 04:26:09.095826   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.095908   10343 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-920118"
	I1216 04:26:09.095945   10343 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-920118"
	I1216 04:26:09.095976   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.096014   10343 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:26:09.096133   10343 addons.go:70] Setting cloud-spanner=true in profile "addons-920118"
	I1216 04:26:09.096205   10343 addons.go:239] Setting addon cloud-spanner=true in "addons-920118"
	I1216 04:26:09.096240   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096251   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.096291   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096388   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096397   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096457   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.096593   10343 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-920118"
	I1216 04:26:09.096639   10343 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-920118"
	I1216 04:26:09.096671   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.096673   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.097704   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.095774   10343 addons.go:70] Setting registry-creds=true in profile "addons-920118"
	I1216 04:26:09.098308   10343 addons.go:239] Setting addon registry-creds=true in "addons-920118"
	I1216 04:26:09.098352   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.098742   10343 out.go:179] * Verifying Kubernetes components...
	I1216 04:26:09.098784   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.097761   10343 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-920118"
	I1216 04:26:09.099127   10343 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-920118"
	I1216 04:26:09.099156   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.103182   10343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:26:09.095763   10343 addons.go:70] Setting inspektor-gadget=true in profile "addons-920118"
	I1216 04:26:09.103452   10343 addons.go:239] Setting addon inspektor-gadget=true in "addons-920118"
	I1216 04:26:09.103511   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.097780   10343 addons.go:70] Setting storage-provisioner=true in profile "addons-920118"
	I1216 04:26:09.103647   10343 addons.go:239] Setting addon storage-provisioner=true in "addons-920118"
	I1216 04:26:09.103794   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.097790   10343 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-920118"
	I1216 04:26:09.104123   10343 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-920118"
	I1216 04:26:09.097804   10343 addons.go:70] Setting volcano=true in profile "addons-920118"
	I1216 04:26:09.104443   10343 addons.go:239] Setting addon volcano=true in "addons-920118"
	I1216 04:26:09.104491   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.104545   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.097812   10343 addons.go:70] Setting volumesnapshots=true in profile "addons-920118"
	I1216 04:26:09.104721   10343 addons.go:239] Setting addon volumesnapshots=true in "addons-920118"
	I1216 04:26:09.104749   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.105037   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.105196   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.105232   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.097822   10343 addons.go:70] Setting registry=true in profile "addons-920118"
	I1216 04:26:09.097852   10343 addons.go:70] Setting ingress-dns=true in profile "addons-920118"
	I1216 04:26:09.104051   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.105587   10343 addons.go:239] Setting addon registry=true in "addons-920118"
	I1216 04:26:09.105618   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.105905   10343 addons.go:239] Setting addon ingress-dns=true in "addons-920118"
	I1216 04:26:09.106004   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.097846   10343 addons.go:70] Setting ingress=true in profile "addons-920118"
	I1216 04:26:09.106065   10343 addons.go:239] Setting addon ingress=true in "addons-920118"
	I1216 04:26:09.106098   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.106444   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.106569   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.106962   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.118450   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.146871   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 04:26:09.148309   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 04:26:09.150645   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 04:26:09.151917   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 04:26:09.153251   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 04:26:09.154432   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 04:26:09.156794   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 04:26:09.159056   10343 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 04:26:09.160461   10343 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:26:09.160480   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 04:26:09.160551   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.161133   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 04:26:09.163570   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 04:26:09.165612   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 04:26:09.166864   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.167374   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.170713   10343 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 04:26:09.171489   10343 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 04:26:09.171515   10343 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 04:26:09.172629   10343 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 04:26:09.172643   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 04:26:09.172701   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.173165   10343 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 04:26:09.173179   10343 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 04:26:09.173235   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.173301   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 04:26:09.173308   10343 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 04:26:09.173342   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	W1216 04:26:09.193411   10343 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 04:26:09.194489   10343 addons.go:239] Setting addon default-storageclass=true in "addons-920118"
	I1216 04:26:09.195348   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.196151   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.196218   10343 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 04:26:09.197156   10343 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 04:26:09.198056   10343 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:26:09.198078   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 04:26:09.198144   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.203389   10343 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:26:09.203410   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 04:26:09.203458   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.203563   10343 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 04:26:09.203916   10343 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-920118"
	I1216 04:26:09.204133   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:09.205657   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:09.205981   10343 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:26:09.206049   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 04:26:09.206113   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.216063   10343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:26:09.217576   10343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:26:09.218669   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:26:09.218748   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.231512   10343 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 04:26:09.232604   10343 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:26:09.234111   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 04:26:09.234835   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.237779   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.240980   10343 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:26:09.244063   10343 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 04:26:09.245970   10343 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:26:09.247400   10343 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:26:09.248319   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 04:26:09.248430   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.248200   10343 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 04:26:09.249674   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 04:26:09.250251   10343 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 04:26:09.250342   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.254875   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.258680   10343 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 04:26:09.260227   10343 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 04:26:09.261995   10343 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 04:26:09.262100   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 04:26:09.262173   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.269060   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.280180   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.280302   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.281051   10343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 04:26:09.282756   10343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:26:09.282773   10343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:26:09.282826   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.283918   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.286372   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.303490   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.306112   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.308930   10343 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 04:26:09.310469   10343 out.go:179]   - Using image docker.io/busybox:stable
	I1216 04:26:09.311124   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.311754   10343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:26:09.311773   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 04:26:09.311831   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:09.314132   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.317213   10343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1216 04:26:09.325169   10343 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:26:09.325205   10343 retry.go:31] will retry after 206.507609ms: ssh: handshake failed: EOF
	I1216 04:26:09.325694   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.334557   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.343270   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:09.359213   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	W1216 04:26:09.361904   10343 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:26:09.361975   10343 retry.go:31] will retry after 133.463129ms: ssh: handshake failed: EOF
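	The two handshake failures above are absorbed by a short, jittered retry rather than failing the bring-up (the delays of ~206ms and ~133ms come from retry.go). A minimal bash sketch of that retry-with-jitter pattern, using hypothetical attempt counts but the host/port values from this run:
	
		# hypothetical sketch of the retry-with-jitter seen in retry.go above
		for attempt in 1 2 3; do
		  ssh -p 32768 -o ConnectTimeout=5 docker@127.0.0.1 true && break
		  delay=$(awk -v r="$RANDOM" 'BEGIN { printf "%.3f", 0.1 + 0.2*r/32767 }')
		  echo "will retry after ${delay}s"; sleep "$delay"
		done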
	I1216 04:26:09.432946   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:26:09.433274   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:26:09.444407   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 04:26:09.444439   10343 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 04:26:09.445750   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 04:26:09.451457   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 04:26:09.451480   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 04:26:09.467823   10343 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 04:26:09.467857   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 04:26:09.470261   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 04:26:09.470285   10343 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 04:26:09.489162   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 04:26:09.489236   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 04:26:09.493394   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:26:09.493394   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:26:09.494942   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:26:09.507967   10343 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 04:26:09.508013   10343 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 04:26:09.509471   10343 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 04:26:09.509498   10343 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 04:26:09.511477   10343 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 04:26:09.511503   10343 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 04:26:09.511800   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:26:09.520537   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:26:09.521923   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 04:26:09.521942   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 04:26:09.522766   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 04:26:09.522780   10343 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 04:26:09.550656   10343 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 04:26:09.550682   10343 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 04:26:09.560861   10343 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:26:09.560887   10343 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 04:26:09.561919   10343 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:26:09.561942   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 04:26:09.563281   10343 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:26:09.563297   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 04:26:09.565091   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 04:26:09.565105   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 04:26:09.605712   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:26:09.610139   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 04:26:09.610172   10343 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 04:26:09.617936   10343 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 04:26:09.617961   10343 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 04:26:09.653345   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:26:09.659622   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:26:09.660410   10343 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 04:26:09.660473   10343 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 04:26:09.686914   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 04:26:09.686936   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 04:26:09.711606   10343 node_ready.go:35] waiting up to 6m0s for node "addons-920118" to be "Ready" ...
	I1216 04:26:09.711858   10343 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
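	The CoreDNS rewrite completed by the kubectl replace pipeline at 04:26:09.281 injects a hosts stanza ahead of the forward plugin, so pods can resolve host.minikube.internal to the Docker gateway. The injected block, reconstructed from the sed expression in that command:
	
		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}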
	I1216 04:26:09.733427   10343 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:26:09.733446   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 04:26:09.760180   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 04:26:09.760209   10343 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 04:26:09.763355   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:26:09.768944   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:26:09.785946   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:26:09.808497   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 04:26:09.808530   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 04:26:09.884279   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 04:26:09.884309   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 04:26:09.952732   10343 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:26:09.952778   10343 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 04:26:10.002649   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:26:10.230227   10343 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-920118" context rescaled to 1 replicas
	I1216 04:26:10.443325   10343 addons.go:495] Verifying addon registry=true in "addons-920118"
	I1216 04:26:10.443426   10343 addons.go:495] Verifying addon metrics-server=true in "addons-920118"
	I1216 04:26:10.445007   10343 out.go:179] * Verifying registry addon...
	I1216 04:26:10.453444   10343 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 04:26:10.457034   10343 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:26:10.457058   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:10.475269   10343 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-920118 service yakd-dashboard -n yakd-dashboard
	
	I1216 04:26:10.746233   10343 addons.go:495] Verifying addon ingress=true in "addons-920118"
	I1216 04:26:10.747769   10343 out.go:179] * Verifying ingress addon...
	I1216 04:26:10.750207   10343 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 04:26:10.752850   10343 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 04:26:10.752869   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:10.956932   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:11.113511   10343 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.327516819s)
	W1216 04:26:11.113550   10343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 04:26:11.113581   10343 retry.go:31] will retry after 196.988358ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
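	Both failures above are the usual CRD-establishment race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define it, and the API server is not yet serving the new snapshot.storage.k8s.io/v1 types when the class is submitted. minikube simply retries, switching to kubectl apply --force at 04:26:11.310 below, which succeeds. One way to sidestep the race entirely is a two-phase apply that waits for the CRD to be Established before applying resources that use it; a minimal sketch against the same manifests:
	
		# apply the CRD, wait until it is served, then apply the resource that uses it
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml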
	I1216 04:26:11.113788   10343 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.111091655s)
	I1216 04:26:11.113821   10343 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-920118"
	I1216 04:26:11.115494   10343 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 04:26:11.118943   10343 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 04:26:11.121280   10343 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:26:11.121304   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:11.253696   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:11.310775   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:26:11.457538   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:11.621550   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:11.715011   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:11.753759   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:11.956383   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:12.122369   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:12.253342   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:12.456823   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:12.622556   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:12.753452   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:12.956488   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:13.122444   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:13.252512   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:13.456705   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:13.622213   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:13.753533   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:13.775886   10343 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.465069783s)
	I1216 04:26:13.957191   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:14.122798   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:14.214904   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:14.254207   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:14.456672   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:14.621764   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:14.752971   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:14.957109   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:15.122713   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:15.253541   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:15.457072   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:15.622665   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:15.753892   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:15.957108   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:16.123009   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:16.252929   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:16.456925   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:16.622153   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:16.714478   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:16.752830   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:16.795089   10343 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 04:26:16.795150   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:16.812594   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:16.910740   10343 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 04:26:16.923575   10343 addons.go:239] Setting addon gcp-auth=true in "addons-920118"
	I1216 04:26:16.923625   10343 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:26:16.923947   10343 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:26:16.940686   10343 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 04:26:16.940737   10343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:26:16.956873   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:16.958572   10343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:26:17.049787   10343 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 04:26:17.051358   10343 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:26:17.052425   10343 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 04:26:17.052439   10343 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 04:26:17.066253   10343 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 04:26:17.066276   10343 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 04:26:17.079687   10343 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:26:17.079708   10343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 04:26:17.092361   10343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:26:17.122086   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:17.253221   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:17.385523   10343 addons.go:495] Verifying addon gcp-auth=true in "addons-920118"
	I1216 04:26:17.387747   10343 out.go:179] * Verifying gcp-auth addon...
	I1216 04:26:17.389442   10343 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 04:26:17.391860   10343 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 04:26:17.391884   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:17.492514   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:17.622192   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:17.753006   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:17.892517   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:17.957263   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:18.122289   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:18.253352   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:18.392882   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:18.456428   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:18.621939   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:18.714629   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:18.753088   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:18.892581   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:18.956111   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:19.121572   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:19.253802   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:19.392632   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:19.456881   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:19.621481   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:19.753400   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:19.892991   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:19.956528   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:20.122315   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:20.253463   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:20.393094   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:20.456498   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:20.622072   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:20.752698   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:20.892169   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:20.956455   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:21.122187   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:21.214517   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:21.253059   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:21.392454   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:21.457095   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:21.621591   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:21.753450   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:21.892131   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:21.956462   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:22.122127   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:22.252827   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:22.392299   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:22.456819   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:22.623770   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:22.753968   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:22.892469   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:22.955943   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:23.121488   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:23.214826   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:23.253343   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:23.392715   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:23.456490   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:23.622035   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:23.753088   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:23.892592   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:23.956695   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:24.122355   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:24.253377   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:24.392139   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:24.456411   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:24.622403   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:24.753495   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:24.892891   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:24.956436   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:25.122167   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:25.253142   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:25.392873   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:25.456324   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:25.621820   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:25.713975   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:25.753596   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:25.891928   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:25.956261   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:26.121984   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:26.252764   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:26.392361   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:26.456724   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:26.622496   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:26.753979   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:26.892477   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:26.956893   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:27.121455   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:27.253424   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:27.392940   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:27.456393   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:27.621940   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:27.714364   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:27.752705   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:27.892586   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:27.955804   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:28.122392   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:28.253453   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:28.392099   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:28.456660   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:28.622438   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:28.753460   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:28.893132   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:28.956593   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:29.122151   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:29.253245   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:29.392967   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:29.456315   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:29.623944   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:29.714479   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:29.752946   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:29.892305   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:29.956882   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:30.121511   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:30.253054   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:30.392506   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:30.456900   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:30.622548   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:30.753439   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:30.893439   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:30.956613   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:31.122049   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:31.252668   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:31.392476   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:31.465523   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:31.622344   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:31.714657   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:31.753214   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:31.892805   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:31.956316   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:32.122093   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:32.254404   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:32.392914   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:32.456530   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:32.622577   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:32.753198   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:32.892578   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:32.955682   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:33.122045   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:33.252862   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:33.392273   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:33.456676   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:33.622334   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:33.753077   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:33.892468   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:33.956744   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:34.123127   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:34.214559   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:34.253122   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:34.392880   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:34.456351   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:34.622059   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:34.752799   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:34.892829   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:34.956072   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:35.121688   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:35.253390   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:35.392879   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:35.456172   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:35.621976   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:35.753627   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:35.891928   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:35.956067   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:36.121506   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:36.214860   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:36.253230   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:36.392728   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:36.456203   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:36.621682   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:36.753676   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:36.892187   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:36.956538   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:37.122173   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:37.253315   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:37.393125   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:37.456308   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:37.622087   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:37.753572   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:37.892124   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:37.956648   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:38.123859   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:38.252872   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:38.392561   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:38.455735   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:38.622305   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:38.714783   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:38.753582   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:38.891888   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:38.956506   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:39.122045   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:39.253039   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:39.393150   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:39.456570   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:39.622102   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:39.753329   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:39.892800   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:39.956180   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:40.121988   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:40.252835   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:40.392333   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:40.457154   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:40.621736   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:40.753471   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:40.891989   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:40.956345   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:41.122041   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:41.214392   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:41.252721   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:41.392327   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:41.456703   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:41.622283   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:41.753266   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:41.892668   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:41.956109   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:42.121710   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:42.253641   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:42.391917   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:42.456546   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:42.622514   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:42.753612   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:42.891837   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:42.956343   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:43.121954   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:43.252591   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:43.392045   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:43.456245   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:43.621674   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:43.713937   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:43.753391   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:43.892858   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:43.956245   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:44.121889   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:44.253034   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:44.392594   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:44.456046   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:44.621572   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:44.753476   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:44.892918   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:44.956287   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:45.121791   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:45.253509   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:45.391875   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:45.456304   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:45.621824   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:45.714298   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:45.752709   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:45.892260   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:45.956656   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:46.122095   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:46.253099   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:46.392642   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:46.456033   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:46.621527   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:46.753484   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:46.892696   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:46.956161   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:47.121854   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:47.253598   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:47.392184   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:47.456516   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:47.621990   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:47.714617   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:47.753136   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:47.892685   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:47.956177   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:48.121851   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:48.252719   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:48.392180   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:48.456572   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:48.622343   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:48.753363   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:48.892562   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:48.955871   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:49.122354   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:49.253578   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:49.392338   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:49.456706   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:49.622537   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:26:49.715036   10343 node_ready.go:57] node "addons-920118" has "Ready":"False" status (will retry)
	I1216 04:26:49.753689   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:49.892058   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:49.956347   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:50.122203   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:50.252857   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:50.392214   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:50.456735   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:50.622202   10343 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:26:50.622224   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:50.714226   10343 node_ready.go:49] node "addons-920118" is "Ready"
	I1216 04:26:50.714262   10343 node_ready.go:38] duration metric: took 41.002627295s for node "addons-920118" to be "Ready" ...
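
[editor's note] The node_ready wait that just completed above is a poll of the node's Ready condition until it flips to True. A minimal client-go sketch of that pattern follows; the kubeconfig source (KUBECONFIG env var), file name, and 2.5s interval are illustrative assumptions, not minikube's actual implementation — the node name is taken from the log.

    // node_ready_sketch.go -- hypothetical standalone example of polling a
    // node's Ready condition, roughly the pattern behind node_ready.go above.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed config source for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	for {
    		node, err := client.CoreV1().Nodes().Get(ctx, "addons-920118", metav1.GetOptions{})
    		if err != nil {
    			panic(err)
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("node is Ready")
    				return
    			}
    		}
    		// The warnings above recur roughly every 2-2.5s while Ready is False.
    		time.Sleep(2500 * time.Millisecond)
    	}
    }
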
	I1216 04:26:50.714278   10343 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:26:50.714321   10343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:26:50.727476   10343 api_server.go:72] duration metric: took 41.63192077s to wait for apiserver process to appear ...
	I1216 04:26:50.727499   10343 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:26:50.727516   10343 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 04:26:50.732171   10343 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 04:26:50.733057   10343 api_server.go:141] control plane version: v1.34.2
	I1216 04:26:50.733079   10343 api_server.go:131] duration metric: took 5.574386ms to wait for apiserver health ...
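
[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver endpoint shown in the log; the loop succeeds once the body reads "ok" with status 200. A one-shot sketch of the same probe, using only the standard library; it skips TLS verification purely to stay self-contained, whereas a real client should trust the cluster CA instead.

    // healthz_probe.go -- one-shot version of the healthz check logged above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch-only shortcut; do not skip verification in real code.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
    }
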
	I1216 04:26:50.733088   10343 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 04:26:50.736580   10343 system_pods.go:59] 20 kube-system pods found
	I1216 04:26:50.736609   10343 system_pods.go:61] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:50.736617   10343 system_pods.go:61] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:26:50.736625   10343 system_pods.go:61] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:50.736630   10343 system_pods.go:61] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:50.736635   10343 system_pods.go:61] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:50.736641   10343 system_pods.go:61] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:50.736648   10343 system_pods.go:61] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:50.736651   10343 system_pods.go:61] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:50.736654   10343 system_pods.go:61] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:50.736659   10343 system_pods.go:61] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:50.736665   10343 system_pods.go:61] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:50.736668   10343 system_pods.go:61] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:50.736672   10343 system_pods.go:61] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:50.736677   10343 system_pods.go:61] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:50.736685   10343 system_pods.go:61] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:50.736690   10343 system_pods.go:61] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:50.736697   10343 system_pods.go:61] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:50.736704   10343 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.736710   10343 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.736716   10343 system_pods.go:61] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:26:50.736723   10343 system_pods.go:74] duration metric: took 3.630467ms to wait for pod list to return data ...
	I1216 04:26:50.736730   10343 default_sa.go:34] waiting for default service account to be created ...
	I1216 04:26:50.738453   10343 default_sa.go:45] found service account: "default"
	I1216 04:26:50.738471   10343 default_sa.go:55] duration metric: took 1.734758ms for default service account to be created ...
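
[editor's note] The default_sa wait that just finished checks for the "default" ServiceAccount, which the controller manager creates asynchronously once the cluster is up. A compact sketch of that single check (config loading as in the earlier sketch; the real wait would retry on a not-found error):

    // default_sa_sketch.go -- one-shot version of the default-ServiceAccount
    // wait above; names come from the log, config source is assumed.
    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	sa, err := client.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
    	if err != nil {
    		fmt.Println("not there yet:", err) // the real wait retries here
    		return
    	}
    	fmt.Println("found service account:", sa.Name)
    }
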
	I1216 04:26:50.738481   10343 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 04:26:50.742284   10343 system_pods.go:86] 20 kube-system pods found
	I1216 04:26:50.742318   10343 system_pods.go:89] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:50.742336   10343 system_pods.go:89] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:26:50.742349   10343 system_pods.go:89] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:50.742358   10343 system_pods.go:89] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:50.742444   10343 system_pods.go:89] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:50.742463   10343 system_pods.go:89] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:50.742478   10343 system_pods.go:89] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:50.742490   10343 system_pods.go:89] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:50.742497   10343 system_pods.go:89] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:50.742512   10343 system_pods.go:89] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:50.742523   10343 system_pods.go:89] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:50.742539   10343 system_pods.go:89] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:50.742548   10343 system_pods.go:89] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:50.742559   10343 system_pods.go:89] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:50.742575   10343 system_pods.go:89] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:50.742589   10343 system_pods.go:89] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:50.742613   10343 system_pods.go:89] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:50.742627   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.742635   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.742647   10343 system_pods.go:89] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:26:50.742663   10343 retry.go:31] will retry after 228.794687ms: missing components: kube-dns
	I1216 04:26:50.752566   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:50.893282   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:50.993298   10343 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:26:50.993322   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:50.994536   10343 system_pods.go:86] 20 kube-system pods found
	I1216 04:26:50.994563   10343 system_pods.go:89] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:50.994570   10343 system_pods.go:89] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:26:50.994577   10343 system_pods.go:89] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:50.994585   10343 system_pods.go:89] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:50.994590   10343 system_pods.go:89] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:50.994594   10343 system_pods.go:89] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:50.994598   10343 system_pods.go:89] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:50.994603   10343 system_pods.go:89] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:50.994607   10343 system_pods.go:89] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:50.994613   10343 system_pods.go:89] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:50.994618   10343 system_pods.go:89] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:50.994621   10343 system_pods.go:89] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:50.994626   10343 system_pods.go:89] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:50.994633   10343 system_pods.go:89] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:50.994638   10343 system_pods.go:89] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:50.994646   10343 system_pods.go:89] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:50.994651   10343 system_pods.go:89] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:50.994657   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.994663   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:50.994669   10343 system_pods.go:89] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:26:50.994681   10343 retry.go:31] will retry after 303.0198ms: missing components: kube-dns
	I1216 04:26:51.121983   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:51.253728   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:51.356158   10343 system_pods.go:86] 20 kube-system pods found
	I1216 04:26:51.356194   10343 system_pods.go:89] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:51.356202   10343 system_pods.go:89] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:26:51.356209   10343 system_pods.go:89] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:51.356215   10343 system_pods.go:89] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:51.356222   10343 system_pods.go:89] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:51.356227   10343 system_pods.go:89] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:51.356232   10343 system_pods.go:89] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:51.356236   10343 system_pods.go:89] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:51.356239   10343 system_pods.go:89] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:51.356245   10343 system_pods.go:89] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:51.356250   10343 system_pods.go:89] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:51.356256   10343 system_pods.go:89] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:51.356263   10343 system_pods.go:89] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:51.356269   10343 system_pods.go:89] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:51.356276   10343 system_pods.go:89] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:51.356281   10343 system_pods.go:89] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:51.356288   10343 system_pods.go:89] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:51.356293   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:51.356299   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:51.356306   10343 system_pods.go:89] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:26:51.356323   10343 retry.go:31] will retry after 362.929112ms: missing components: kube-dns
	I1216 04:26:51.391947   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:51.456758   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:51.623868   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:51.723437   10343 system_pods.go:86] 20 kube-system pods found
	I1216 04:26:51.723470   10343 system_pods.go:89] "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:26:51.723479   10343 system_pods.go:89] "coredns-66bc5c9577-znpvj" [a7b309b2-e72c-43f1-b902-0f490e9e4967] Running
	I1216 04:26:51.723491   10343 system_pods.go:89] "csi-hostpath-attacher-0" [5628bf89-dc6d-47d3-b0c3-990515ce71ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:26:51.723500   10343 system_pods.go:89] "csi-hostpath-resizer-0" [c96e85da-c270-4f8f-a034-539f819a347c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:26:51.723509   10343 system_pods.go:89] "csi-hostpathplugin-sm7qv" [c5c9ad87-6ec6-44b5-ad9e-a495ccdb106d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:26:51.723515   10343 system_pods.go:89] "etcd-addons-920118" [d69944c5-cfdb-4cb5-90b3-3cb8aff69aa9] Running
	I1216 04:26:51.723521   10343 system_pods.go:89] "kindnet-grqmf" [3b2bc9f2-28b2-429f-9883-3dc36e335734] Running
	I1216 04:26:51.723526   10343 system_pods.go:89] "kube-apiserver-addons-920118" [6b0b8924-31ea-43fd-bdb6-33e23a1b8e96] Running
	I1216 04:26:51.723531   10343 system_pods.go:89] "kube-controller-manager-addons-920118" [96727672-1e12-4b1b-a8ad-541a6e3564d5] Running
	I1216 04:26:51.723539   10343 system_pods.go:89] "kube-ingress-dns-minikube" [0d46d3ba-761c-41e1-a455-fe437a5804ae] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:26:51.723546   10343 system_pods.go:89] "kube-proxy-8wpnk" [554247b5-020a-45dd-b009-937e2fac8505] Running
	I1216 04:26:51.723550   10343 system_pods.go:89] "kube-scheduler-addons-920118" [4b5ac1ac-f4c7-46ce-8dc6-cb3816f9f4ad] Running
	I1216 04:26:51.723555   10343 system_pods.go:89] "metrics-server-85b7d694d7-49lmv" [af012cae-1c88-4ce3-afc2-969b5ff307e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:26:51.723565   10343 system_pods.go:89] "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:26:51.723571   10343 system_pods.go:89] "registry-6b586f9694-76pzp" [3dee72d8-ff23-418f-a824-a679edf7a2f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:26:51.723580   10343 system_pods.go:89] "registry-creds-764b6fb674-rblqj" [0738a306-d479-46ae-bb3b-4e49e6bc1476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:26:51.723586   10343 system_pods.go:89] "registry-proxy-sh6dp" [ba891aa4-deae-402e-a64f-e2923fafd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:26:51.723590   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jwq5" [60fd9d6d-b8f9-42af-8535-92c77154226e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:51.723598   10343 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xnx2s" [e889cf59-f45f-4aa3-9ceb-465c6fd6967c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:26:51.723603   10343 system_pods.go:89] "storage-provisioner" [128f124f-ec1f-4539-831b-bd02efae583d] Running
	I1216 04:26:51.723611   10343 system_pods.go:126] duration metric: took 985.124018ms to wait for k8s-apps to be running ...
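
[editor's note] Both the system_pods retries above and the kapi.go "waiting for pod" lines interleaved through this log share one shape: list pods by label selector and succeed once every match is Running. A hedged client-go sketch of a single iteration of that check; the selector and namespace are taken from the log, everything else is illustrative.

    // pods_running_sketch.go -- one iteration of the pattern behind the
    // kapi.go "waiting for pod" lines above. Illustrative only.
    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed config source
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
    		metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
    	if err != nil {
    		panic(err)
    	}
    	allRunning := len(pods.Items) > 0
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			allRunning = false
    			fmt.Printf("pod %s still %s\n", p.Name, p.Status.Phase)
    		}
    	}
    	if allRunning {
    		fmt.Println("all registry pods Running") // the real wait loops until this
    	}
    }
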
	I1216 04:26:51.723621   10343 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 04:26:51.723663   10343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:26:51.739330   10343 system_svc.go:56] duration metric: took 15.699551ms WaitForService to wait for kubelet
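
[editor's note] The kubelet service check above runs "systemctl is-active --quiet" over SSH inside the node with sudo; --quiet suppresses output, so the exit code alone carries the answer (0 means active). A local sketch of the same probe:

    // kubelet_active_sketch.go -- mirrors the systemctl probe logged above;
    // run locally here, whereas the test runs it via sudo over SSH.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Run() returns nil exactly when the unit reports active (exit 0).
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
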
	I1216 04:26:51.739362   10343 kubeadm.go:587] duration metric: took 42.643807434s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:26:51.739385   10343 node_conditions.go:102] verifying NodePressure condition ...
	I1216 04:26:51.741797   10343 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 04:26:51.741819   10343 node_conditions.go:123] node cpu capacity is 8
	I1216 04:26:51.741837   10343 node_conditions.go:105] duration metric: took 2.445119ms to run NodePressure ...
	I1216 04:26:51.741848   10343 start.go:242] waiting for startup goroutines ...
	I1216 04:26:51.753185   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:51.892788   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:51.956779   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:52.123150   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:52.254362   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:52.393527   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:52.456335   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:52.623114   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:52.753725   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:52.892497   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:52.957406   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:53.123305   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:53.254174   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:53.392845   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:53.456559   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:53.622846   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:53.753935   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:53.892522   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:53.957487   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:54.122776   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:54.253795   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:54.393411   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:54.457179   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:54.622503   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:54.753434   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:54.893443   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:54.957363   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:55.123225   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:55.253782   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:55.392402   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:55.457527   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:55.622504   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:55.753670   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:55.892342   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:55.957326   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:56.122372   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:56.252880   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:56.392624   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:56.456416   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:56.623083   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:56.753593   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:56.892333   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:56.956778   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:57.123401   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:57.253232   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:57.392452   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:57.456919   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:57.622274   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:57.754057   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:57.892778   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:57.956179   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:58.122700   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:58.255185   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:58.393496   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:58.457436   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:58.622512   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:58.752938   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:58.892633   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:58.956487   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:59.123165   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:59.254326   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:59.393059   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:59.456945   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:26:59.622595   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:26:59.753811   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:26:59.893062   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:26:59.956835   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:00.122985   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:00.253953   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:00.392829   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:00.456727   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:00.622180   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:00.753969   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:00.892563   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:00.956982   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:01.121991   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:01.253685   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:01.392295   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:01.457630   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:01.625363   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:01.754321   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:01.892615   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:01.957166   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:02.123493   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:02.254245   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:02.393498   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:02.457674   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:02.623726   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:02.753635   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:02.892751   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:02.956745   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:03.159506   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:03.254213   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:03.393106   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:03.457240   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:03.621912   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:03.753746   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:03.892480   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:03.956428   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:04.123076   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:04.253789   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:04.393118   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:04.457202   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:04.622773   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:04.753548   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:04.915651   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:04.956394   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:05.146913   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:05.253827   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:05.392752   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:05.456536   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:05.622634   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:05.756422   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:05.892959   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:05.956690   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:06.123197   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:06.253633   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:06.392458   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:06.493452   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:06.623774   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:06.753271   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:06.892831   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:06.956800   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:07.123348   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:07.254912   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:07.393692   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:07.457170   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:07.622211   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:07.753829   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:07.892881   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:07.957043   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:08.122175   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:08.254320   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:08.392630   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:08.456013   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:08.622103   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:08.753349   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:08.893830   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:08.995588   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:09.122645   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:09.253267   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:09.393548   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:09.457248   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:09.622532   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:09.753661   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:09.892434   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:09.956824   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:10.122995   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:10.253811   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:10.392345   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:10.457376   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:10.622155   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:10.754084   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:10.892884   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:10.956573   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:11.124633   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:11.253599   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:11.392448   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:11.457164   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:11.622073   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:11.755324   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:11.893241   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:11.956565   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:12.122307   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:12.253856   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:12.392115   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:12.456729   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:12.622677   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:12.754155   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:12.893478   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:12.957238   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:13.122625   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:13.253451   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:13.393328   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:13.456973   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:13.621691   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:13.832052   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:13.892314   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:13.989385   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:14.121949   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:14.253952   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:14.393860   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:14.457918   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:14.624881   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:14.754645   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:14.893867   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:14.956713   10343 kapi.go:107] duration metric: took 1m4.503265783s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 04:27:15.124160   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:15.254352   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:15.392743   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:15.622840   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:15.754060   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:15.892941   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:16.122671   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:16.253146   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:16.392666   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:16.623308   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:16.754214   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:16.892894   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:17.123676   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:17.253734   10343 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:17.391983   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:17.622207   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:17.754709   10343 kapi.go:107] duration metric: took 1m7.004500189s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 04:27:17.893903   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:18.122168   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:18.393515   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:18.623101   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:18.940144   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:19.156294   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:19.392720   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:19.623174   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:19.892944   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:20.121876   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:20.392740   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:20.623198   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:20.893138   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:21.121950   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:21.393149   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:21.622238   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:21.892820   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:22.123123   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:22.392951   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:22.652382   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:22.893000   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:23.121896   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:23.392639   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:23.623476   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:23.893238   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:24.121948   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:24.392982   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:24.622070   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:24.893061   10343 kapi.go:107] duration metric: took 1m7.503614245s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 04:27:24.894422   10343 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-920118 cluster.
	I1216 04:27:24.895900   10343 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 04:27:24.897326   10343 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
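	The three advisory lines above describe the gcp-auth opt-out: the mutating webhook skips pods that carry the `gcp-auth-skip-secret` label. A minimal Go sketch of a pod spec carrying that label follows; only the label key comes from the log, while the value "true" and the rest of the spec are illustrative assumptions, not minikube's code.

```go
// Sketch of a pod that opts out of gcp-auth credential mounting. Only the
// label key gcp-auth-skip-secret comes from the log above; the value "true"
// and the pod spec itself are assumptions for illustration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			// The gcp-auth mutating webhook skips pods carrying this label,
			// so the credentials secret is not mounted into them.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}},
		},
	}
	fmt.Printf("%s labels: %v\n", pod.Name, pod.Labels)
}
```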
	I1216 04:27:25.123598   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:25.622998   10343 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:26.122272   10343 kapi.go:107] duration metric: took 1m15.003326955s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 04:27:26.123947   10343 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, cloud-spanner, default-storageclass, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, metrics-server, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1216 04:27:26.125124   10343 addons.go:530] duration metric: took 1m17.029513661s for enable addons: enabled=[nvidia-device-plugin registry-creds cloud-spanner default-storageclass amd-gpu-device-plugin inspektor-gadget ingress-dns metrics-server storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1216 04:27:26.125167   10343 start.go:247] waiting for cluster config update ...
	I1216 04:27:26.125194   10343 start.go:256] writing updated cluster config ...
	I1216 04:27:26.125522   10343 ssh_runner.go:195] Run: rm -f paused
	I1216 04:27:26.129322   10343 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:27:26.132085   10343 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-znpvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.135726   10343 pod_ready.go:94] pod "coredns-66bc5c9577-znpvj" is "Ready"
	I1216 04:27:26.135744   10343 pod_ready.go:86] duration metric: took 3.639958ms for pod "coredns-66bc5c9577-znpvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.137467   10343 pod_ready.go:83] waiting for pod "etcd-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.140670   10343 pod_ready.go:94] pod "etcd-addons-920118" is "Ready"
	I1216 04:27:26.140689   10343 pod_ready.go:86] duration metric: took 3.20403ms for pod "etcd-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.142233   10343 pod_ready.go:83] waiting for pod "kube-apiserver-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.145341   10343 pod_ready.go:94] pod "kube-apiserver-addons-920118" is "Ready"
	I1216 04:27:26.145357   10343 pod_ready.go:86] duration metric: took 3.103983ms for pod "kube-apiserver-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.146846   10343 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.533741   10343 pod_ready.go:94] pod "kube-controller-manager-addons-920118" is "Ready"
	I1216 04:27:26.533782   10343 pod_ready.go:86] duration metric: took 386.910026ms for pod "kube-controller-manager-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:26.732980   10343 pod_ready.go:83] waiting for pod "kube-proxy-8wpnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:27.132461   10343 pod_ready.go:94] pod "kube-proxy-8wpnk" is "Ready"
	I1216 04:27:27.132495   10343 pod_ready.go:86] duration metric: took 399.487575ms for pod "kube-proxy-8wpnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:27.333584   10343 pod_ready.go:83] waiting for pod "kube-scheduler-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:27.733163   10343 pod_ready.go:94] pod "kube-scheduler-addons-920118" is "Ready"
	I1216 04:27:27.733198   10343 pod_ready.go:86] duration metric: took 399.591784ms for pod "kube-scheduler-addons-920118" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:27:27.733214   10343 pod_ready.go:40] duration metric: took 1.603864645s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:27:27.776559   10343 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 04:27:27.778482   10343 out.go:179] * Done! kubectl is now configured to use "addons-920118" cluster and "default" namespace by default
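	The pod_ready lines above show the shape of minikube's readiness wait: poll the pods matching each control-plane label until their Ready condition is True, bounded by an overall deadline (4m0s here). A minimal client-go sketch of that polling pattern follows; the names, the 500ms interval, and the error handling are illustrative assumptions, not the actual kapi.go/pod_ready.go implementation.

```go
// Sketch of the readiness poll behind the pod_ready log lines: list pods by
// label and retry until each reports Ready, or fail at the deadline. Names,
// the poll interval, and error handling are illustrative, not minikube's code.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitPodsReady blocks until every pod matching selector in ns is Ready,
// polling roughly twice a second, or returns an error once timeout elapses.
func WaitPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q in %q not ready within %v", selector, ns, timeout)
}

func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false
		}
	}
	return true
}
```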
	
	
	==> CRI-O <==
	Dec 16 04:27:25 addons-920118 crio[777]: time="2025-12-16T04:27:25.334913255Z" level=info msg="Starting container: 7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0" id=22b684d4-8c55-40ce-b1ac-17a1f90241ec name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 04:27:25 addons-920118 crio[777]: time="2025-12-16T04:27:25.337570361Z" level=info msg="Started container" PID=6114 containerID=7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0 description=kube-system/csi-hostpathplugin-sm7qv/csi-snapshotter id=22b684d4-8c55-40ce-b1ac-17a1f90241ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=cdf6d96b697c3bee29f01f530eae5944966261324b7ae5b0cab754cf72746a85
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.950911988Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a11a3507-c003-49a7-9091-7deb18f331f9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.950979062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.95687737Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:307dcc9e89198ab0c0020c5c001001f4e6e358c568f77f293a904277cd9c4fbe UID:bc7a543f-c0f3-4c70-8051-a580aed61e46 NetNS:/var/run/netns/13c1ed02-5261-4455-806e-6f107cb1bc52 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000898570}] Aliases:map[]}"
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.956904391Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.9658716Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:307dcc9e89198ab0c0020c5c001001f4e6e358c568f77f293a904277cd9c4fbe UID:bc7a543f-c0f3-4c70-8051-a580aed61e46 NetNS:/var/run/netns/13c1ed02-5261-4455-806e-6f107cb1bc52 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000898570}] Aliases:map[]}"
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.965979296Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.966724166Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.96753306Z" level=info msg="Ran pod sandbox 307dcc9e89198ab0c0020c5c001001f4e6e358c568f77f293a904277cd9c4fbe with infra container: default/busybox/POD" id=a11a3507-c003-49a7-9091-7deb18f331f9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.968645283Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=470bf426-bb7a-4780-b5d4-a23a0b686225 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.968766565Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=470bf426-bb7a-4780-b5d4-a23a0b686225 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.968810597Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=470bf426-bb7a-4780-b5d4-a23a0b686225 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.969348814Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=20c534a1-5561-4872-aac9-6c9634f2bfd9 name=/runtime.v1.ImageService/PullImage
	Dec 16 04:27:28 addons-920118 crio[777]: time="2025-12-16T04:27:28.970675722Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.201757086Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=20c534a1-5561-4872-aac9-6c9634f2bfd9 name=/runtime.v1.ImageService/PullImage
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.202378443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2c49fef3-18cc-42aa-bbb0-9d01a2c4a851 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.203699836Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d379282f-2c2d-4e75-b76b-77b8be67e7ef name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.207042528Z" level=info msg="Creating container: default/busybox/busybox" id=d3a8196a-10e8-4ab8-b985-dd5acf4436b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.207152542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.212185499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.212621975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.239144568Z" level=info msg="Created container de66658adcaed8ab80fcb6ab17949a43b3b8ca0587452dcbba49adb882e1c853: default/busybox/busybox" id=d3a8196a-10e8-4ab8-b985-dd5acf4436b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.239678203Z" level=info msg="Starting container: de66658adcaed8ab80fcb6ab17949a43b3b8ca0587452dcbba49adb882e1c853" id=443635d3-c214-44c9-a329-156738ab2001 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 04:27:30 addons-920118 crio[777]: time="2025-12-16T04:27:30.241411974Z" level=info msg="Started container" PID=6229 containerID=de66658adcaed8ab80fcb6ab17949a43b3b8ca0587452dcbba49adb882e1c853 description=default/busybox/busybox id=443635d3-c214-44c9-a329-156738ab2001 name=/runtime.v1.RuntimeService/StartContainer sandboxID=307dcc9e89198ab0c0020c5c001001f4e6e358c568f77f293a904277cd9c4fbe
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	de66658adcaed       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   307dcc9e89198       busybox                                     default
	7003e956af0ad       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	7155e643211dd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 13 seconds ago       Running             gcp-auth                                 0                   f7ecbaf8fd130       gcp-auth-78565c9fb4-nb8d6                   gcp-auth
	8aff65da11b97       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	24531ebc7ad25       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             15 seconds ago       Exited              patch                                    2                   e3cbbaf8bd1c0       gcp-auth-certs-patch-j76z9                  gcp-auth
	020f3f05b4049       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	83f26bc5ed030       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 seconds ago       Running             hostpath                                 0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	bb8f922f85ca6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            18 seconds ago       Running             gadget                                   0                   5509c148fc0bc       gadget-76954                                gadget
	086cde64476bc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                20 seconds ago       Running             node-driver-registrar                    0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	ac8adb18f5d79       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             20 seconds ago       Running             controller                               0                   1c925d2e74753       ingress-nginx-controller-85d4c799dd-zqp9n   ingress-nginx
	630c8b3279b37       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              24 seconds ago       Running             registry-proxy                           0                   ccc20420d744f       registry-proxy-sh6dp                        kube-system
	3c77fede48641       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     26 seconds ago       Running             amd-gpu-device-plugin                    0                   bfa53a42fbc08       amd-gpu-device-plugin-kffg9                 kube-system
	89e5c7bfccb0e       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             26 seconds ago       Exited              patch                                    2                   8421a491089dc       ingress-nginx-admission-patch-wgjhv         ingress-nginx
	e71caaec714d2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   27 seconds ago       Running             csi-external-health-monitor-controller   0                   cdf6d96b697c3       csi-hostpathplugin-sm7qv                    kube-system
	6f5d9aa495070       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   28 seconds ago       Exited              create                                   0                   d193f3f814a80       gcp-auth-certs-create-rk2d7                 gcp-auth
	b9a013f9f0f76       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              28 seconds ago       Running             csi-resizer                              0                   ed31ee48618f9       csi-hostpath-resizer-0                      kube-system
	42eda45aa69e5       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     29 seconds ago       Running             nvidia-device-plugin-ctr                 0                   74fcbb01d0d70       nvidia-device-plugin-daemonset-82xlt        kube-system
	65be2e67b7994       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   32 seconds ago       Exited              create                                   0                   a84a696c37162       ingress-nginx-admission-create-dhdgp        ingress-nginx
	e4e430d943bdb       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               32 seconds ago       Running             minikube-ingress-dns                     0                   f58ae28d123e7       kube-ingress-dns-minikube                   kube-system
	1487414fac27e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      37 seconds ago       Running             volume-snapshot-controller               0                   31d52390e7592       snapshot-controller-7d9fbc56b8-xnx2s        kube-system
	1aca72e751ef6       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        37 seconds ago       Running             metrics-server                           0                   12d003ec00240       metrics-server-85b7d694d7-49lmv             kube-system
	7bbeac49d29ea       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           38 seconds ago       Running             registry                                 0                   2325dd9007fc9       registry-6b586f9694-76pzp                   kube-system
	3ba35a512da89       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             39 seconds ago       Running             csi-attacher                             0                   acb5a720f68d4       csi-hostpath-attacher-0                     kube-system
	4176f3267a36c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      40 seconds ago       Running             volume-snapshot-controller               0                   67be75d6e474c       snapshot-controller-7d9fbc56b8-7jwq5        kube-system
	45baa7e28fe04       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             42 seconds ago       Running             local-path-provisioner                   0                   3ce5fb735f966       local-path-provisioner-648f6765c9-m6ntg     local-path-storage
	c54e6bce45489       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              42 seconds ago       Running             yakd                                     0                   8555cbb7a3371       yakd-dashboard-5ff678cb9-phnzx              yakd-dashboard
	2d841e7838e4f       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               44 seconds ago       Running             cloud-spanner-emulator                   0                   1c16a2385a72e       cloud-spanner-emulator-5bdddb765-p2sxn      default
	fe8ccf3878016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             47 seconds ago       Running             storage-provisioner                      0                   ac7472e34770a       storage-provisioner                         kube-system
	157d004cac3b9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             47 seconds ago       Running             coredns                                  0                   04d847ef7710b       coredns-66bc5c9577-znpvj                    kube-system
	2b8f11e422be8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   50c4d0f12aacb       kindnet-grqmf                               kube-system
	20fa3ad645ed1       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   1ed495a721103       kube-proxy-8wpnk                            kube-system
	c71b691cde317       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   9c98c42e00b7f       kube-scheduler-addons-920118                kube-system
	32eff47c7448a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   736679ee4953e       kube-apiserver-addons-920118                kube-system
	0cdc3c36bf12c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   64a56643ef1f8       kube-controller-manager-addons-920118       kube-system
	6973498efaf26       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   f0a587b7070a2       etcd-addons-920118                          kube-system
	
	
	==> coredns [157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e] <==
	[INFO] 10.244.0.17:53608 - 20376 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000149172s
	[INFO] 10.244.0.17:50436 - 29525 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010393s
	[INFO] 10.244.0.17:50436 - 29235 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000135255s
	[INFO] 10.244.0.17:51502 - 64490 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000085405s
	[INFO] 10.244.0.17:51502 - 64150 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.0000725s
	[INFO] 10.244.0.17:36556 - 58130 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000058284s
	[INFO] 10.244.0.17:36556 - 57797 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000081877s
	[INFO] 10.244.0.17:35172 - 29615 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00005987s
	[INFO] 10.244.0.17:35172 - 29409 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000038785s
	[INFO] 10.244.0.17:34024 - 26472 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094311s
	[INFO] 10.244.0.17:34024 - 26685 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000131809s
	[INFO] 10.244.0.22:35580 - 28333 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174596s
	[INFO] 10.244.0.22:52213 - 5537 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000245838s
	[INFO] 10.244.0.22:57904 - 46393 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127965s
	[INFO] 10.244.0.22:47955 - 41041 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167296s
	[INFO] 10.244.0.22:54228 - 4443 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119943s
	[INFO] 10.244.0.22:47166 - 58683 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143761s
	[INFO] 10.244.0.22:50862 - 13086 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006321971s
	[INFO] 10.244.0.22:48096 - 47704 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007139357s
	[INFO] 10.244.0.22:48913 - 17181 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004469334s
	[INFO] 10.244.0.22:36718 - 43958 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008461629s
	[INFO] 10.244.0.22:34871 - 53772 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004295477s
	[INFO] 10.244.0.22:53396 - 59322 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005290741s
	[INFO] 10.244.0.22:50314 - 57704 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00105588s
	[INFO] 10.244.0.22:38572 - 8896 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001274948s
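	The query sequence above is ordinary resolv.conf search-path expansion: with Kubernetes' default ndots:5, a name with fewer than five dots is tried with each search suffix first (producing the NXDOMAIN pairs) and only then as an absolute name (the final NOERROR). A small Go sketch reproducing that candidate order; the search list and ndots value are inferred from the logged queries, not read from a real resolv.conf.

```go
// Sketch of glibc/musl-style search-path expansion, matching the coredns
// query order above. Search list and ndots are inferred from the log.
package main

import (
	"fmt"
	"strings"
)

// candidates returns the lookup order for a relative name: when the name has
// fewer dots than ndots, every search suffix is tried before the bare name.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
		return append(out, name)
	}
	out = append(out, name)
	for _, s := range search {
		out = append(out, name+"."+s)
	}
	return out
}

func main() {
	search := []string{
		"svc.cluster.local", "cluster.local",
		"us-east4-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal", "google.internal",
	}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // five suffixed candidates (NXDOMAIN), then the bare name (NOERROR)
	}
}
```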
	
	
	==> describe nodes <==
	Name:               addons-920118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-920118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=addons-920118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T04_26_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-920118
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-920118"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 04:26:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-920118
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 04:27:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 04:27:14 +0000   Tue, 16 Dec 2025 04:25:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 04:27:14 +0000   Tue, 16 Dec 2025 04:25:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 04:27:14 +0000   Tue, 16 Dec 2025 04:25:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 04:27:14 +0000   Tue, 16 Dec 2025 04:26:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-920118
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                1aa1cdd3-ea31-4f68-9e9e-dbc03ee47c3c
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-p2sxn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-76954                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  gcp-auth                    gcp-auth-78565c9fb4-nb8d6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-zqp9n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         88s
	  kube-system                 amd-gpu-device-plugin-kffg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-66bc5c9577-znpvj                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpathplugin-sm7qv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 etcd-addons-920118                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         95s
	  kube-system                 kindnet-grqmf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-addons-920118                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-addons-920118        200m (2%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-8wpnk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-addons-920118                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 metrics-server-85b7d694d7-49lmv              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         88s
	  kube-system                 nvidia-device-plugin-daemonset-82xlt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 registry-6b586f9694-76pzp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 registry-creds-764b6fb674-rblqj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-sh6dp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 snapshot-controller-7d9fbc56b8-7jwq5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 snapshot-controller-7d9fbc56b8-xnx2s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  local-path-storage          local-path-provisioner-648f6765c9-m6ntg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-phnzx               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
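	(These totals are the column sums from the non-terminated pod table above: CPU requests = 250m + 200m + 6 × 100m = 1050m, i.e. 1050m/8000m ≈ 13% of the 8-CPU node; memory requests = 90 + 70 + 100 + 50 + 200 + 128 = 638Mi; memory limits = 170 + 50 + 256 = 476Mi.)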
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 87s   kube-proxy       
	  Normal  Starting                 95s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s   kubelet          Node addons-920118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s   kubelet          Node addons-920118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s   kubelet          Node addons-920118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s   node-controller  Node addons-920118 event: Registered Node addons-920118 in Controller
	  Normal  NodeReady                48s   kubelet          Node addons-920118 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec16 04:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001802] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.386059] i8042: Warning: Keylock active
	[  +0.012131] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494322] block sda: the capability attribute has been deprecated.
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391] <==
	{"level":"warn","ts":"2025-12-16T04:26:00.489553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.497039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.504795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.512106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.520573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.530684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.537076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.545921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.552575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.560689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.567366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.573666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.579746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.600465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.607972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.614759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:00.657940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:11.530595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:11.537749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:38.092428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:38.099300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:38.109692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:26:38.116645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58722","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T04:27:13.830310Z","caller":"traceutil/trace.go:172","msg":"trace[317146620] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"145.290515ms","start":"2025-12-16T04:27:13.684997Z","end":"2025-12-16T04:27:13.830287Z","steps":["trace[317146620] 'process raft request'  (duration: 71.106735ms)","trace[317146620] 'compare'  (duration: 74.086135ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:27:22.650076Z","caller":"traceutil/trace.go:172","msg":"trace[1727911472] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"102.929324ms","start":"2025-12-16T04:27:22.547120Z","end":"2025-12-16T04:27:22.650049Z","steps":["trace[1727911472] 'process raft request'  (duration: 62.883528ms)","trace[1727911472] 'compare'  (duration: 39.896633ms)"],"step_count":2}
	
	
	==> gcp-auth [7155e643211ddd152b4f57f80440213bcc24fb8523defe39dd525466f92bbde2] <==
	2025/12/16 04:27:24 GCP Auth Webhook started!
	2025/12/16 04:27:28 Ready to marshal response ...
	2025/12/16 04:27:28 Ready to write response ...
	2025/12/16 04:27:28 Ready to marshal response ...
	2025/12/16 04:27:28 Ready to write response ...
	2025/12/16 04:27:28 Ready to marshal response ...
	2025/12/16 04:27:28 Ready to write response ...
	
	
	==> kernel <==
	 04:27:38 up 10 min,  0 user,  load average: 2.17, 1.02, 0.39
	Linux addons-920118 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a] <==
	I1216 04:26:09.964443       1 main.go:148] setting mtu 1500 for CNI 
	I1216 04:26:09.964465       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 04:26:09.964495       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T04:26:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 04:26:10.190116       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 04:26:10.190512       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 04:26:10.190576       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 04:26:10.191005       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1216 04:26:40.188819       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 04:26:40.188821       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1216 04:26:40.190088       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1216 04:26:40.255517       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1216 04:26:41.691633       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 04:26:41.691661       1 metrics.go:72] Registering metrics
	I1216 04:26:41.691748       1 controller.go:711] "Syncing nftables rules"
	I1216 04:26:50.190469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:26:50.190530       1 main.go:301] handling current node
	I1216 04:27:00.187737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:27:00.187775       1 main.go:301] handling current node
	I1216 04:27:10.187150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:27:10.187191       1 main.go:301] handling current node
	I1216 04:27:20.188117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:27:20.188171       1 main.go:301] handling current node
	I1216 04:27:30.187237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:27:30.187278       1 main.go:301] handling current node
	
	
	==> kube-apiserver [32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9] <==
	I1216 04:26:17.332845       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.255.203"}
	W1216 04:26:38.092334       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 04:26:38.099316       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 04:26:38.109678       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1216 04:26:38.116584       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 04:26:50.486472       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.255.203:443: connect: connection refused
	E1216 04:26:50.486510       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.255.203:443: connect: connection refused" logger="UnhandledError"
	W1216 04:26:50.486549       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.255.203:443: connect: connection refused
	E1216 04:26:50.486596       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.255.203:443: connect: connection refused" logger="UnhandledError"
	W1216 04:26:50.504706       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.255.203:443: connect: connection refused
	E1216 04:26:50.504741       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.255.203:443: connect: connection refused" logger="UnhandledError"
	W1216 04:26:50.506406       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.255.203:443: connect: connection refused
	E1216 04:26:50.506444       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.255.203:443: connect: connection refused" logger="UnhandledError"
	W1216 04:27:02.644088       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 04:27:02.644165       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1216 04:27:02.644206       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.232.35:443: connect: connection refused" logger="UnhandledError"
	E1216 04:27:02.646059       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.232.35:443: connect: connection refused" logger="UnhandledError"
	E1216 04:27:02.651688       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.232.35:443: connect: connection refused" logger="UnhandledError"
	E1216 04:27:02.672401       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.232.35:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.232.35:443: connect: connection refused" logger="UnhandledError"
	I1216 04:27:02.748109       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 04:27:36.782867       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36302: use of closed network connection
	E1216 04:27:36.923871       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36336: use of closed network connection
	
	
	==> kube-controller-manager [0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093] <==
	I1216 04:26:08.073376       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 04:26:08.073455       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 04:26:08.073496       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 04:26:08.073521       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 04:26:08.073534       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 04:26:08.073543       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 04:26:08.073518       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 04:26:08.073519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 04:26:08.073524       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 04:26:08.073461       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 04:26:08.074836       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 04:26:08.074863       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 04:26:08.075376       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 04:26:08.081114       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 04:26:08.082238       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:26:08.092515       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1216 04:26:10.337514       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1216 04:26:38.086987       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 04:26:38.087134       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1216 04:26:38.087167       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 04:26:38.099221       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 04:26:38.102486       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 04:26:38.187410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:26:38.203655       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 04:26:53.028104       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889] <==
	I1216 04:26:10.332044       1 server_linux.go:53] "Using iptables proxy"
	I1216 04:26:10.440595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 04:26:10.541322       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 04:26:10.541355       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 04:26:10.541437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 04:26:10.562530       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 04:26:10.562659       1 server_linux.go:132] "Using iptables Proxier"
	I1216 04:26:10.569808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 04:26:10.575418       1 server.go:527] "Version info" version="v1.34.2"
	I1216 04:26:10.575462       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 04:26:10.578650       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 04:26:10.578849       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 04:26:10.578649       1 config.go:200] "Starting service config controller"
	I1216 04:26:10.578686       1 config.go:106] "Starting endpoint slice config controller"
	I1216 04:26:10.579195       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 04:26:10.579190       1 config.go:309] "Starting node config controller"
	I1216 04:26:10.579301       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 04:26:10.579332       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 04:26:10.579300       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 04:26:10.679488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 04:26:10.679503       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 04:26:10.679522       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1] <==
	I1216 04:26:01.232343       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 04:26:01.234856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 04:26:01.235093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 04:26:01.238007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 04:26:01.238260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 04:26:01.238159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 04:26:01.238238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:26:01.238359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:26:01.238414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:26:01.238457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:26:01.238462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:26:01.238158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:26:01.238374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:26:01.239105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 04:26:01.239106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 04:26:01.239214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:26:01.239218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 04:26:01.239319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:26:01.239351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 04:26:01.239387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 04:26:02.095760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:26:02.127669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:26:02.161042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:26:02.205957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1216 04:26:02.730927       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 04:27:13 addons-920118 kubelet[1286]: I1216 04:27:13.678434    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-kffg9" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:27:14 addons-920118 kubelet[1286]: I1216 04:27:14.089852    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lx6d\" (UniqueName: \"kubernetes.io/projected/04800015-7ad0-4bad-8b0d-9a8daeb09fa2-kube-api-access-9lx6d\") pod \"04800015-7ad0-4bad-8b0d-9a8daeb09fa2\" (UID: \"04800015-7ad0-4bad-8b0d-9a8daeb09fa2\") "
	Dec 16 04:27:14 addons-920118 kubelet[1286]: I1216 04:27:14.093066    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04800015-7ad0-4bad-8b0d-9a8daeb09fa2-kube-api-access-9lx6d" (OuterVolumeSpecName: "kube-api-access-9lx6d") pod "04800015-7ad0-4bad-8b0d-9a8daeb09fa2" (UID: "04800015-7ad0-4bad-8b0d-9a8daeb09fa2"). InnerVolumeSpecName "kube-api-access-9lx6d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 04:27:14 addons-920118 kubelet[1286]: I1216 04:27:14.190819    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lx6d\" (UniqueName: \"kubernetes.io/projected/04800015-7ad0-4bad-8b0d-9a8daeb09fa2-kube-api-access-9lx6d\") on node \"addons-920118\" DevicePath \"\""
	Dec 16 04:27:14 addons-920118 kubelet[1286]: I1216 04:27:14.684311    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-sh6dp" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:27:14 addons-920118 kubelet[1286]: I1216 04:27:14.688720    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8421a491089dcfc19e889125851c017bb87803b8c343a02dc2b5632d84ff7b48"
	Dec 16 04:27:14 addons-920118 kubelet[1286]: I1216 04:27:14.849272    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-sh6dp" podStartSLOduration=1.788851792 podStartE2EDuration="24.849247226s" podCreationTimestamp="2025-12-16 04:26:50 +0000 UTC" firstStartedPulling="2025-12-16 04:26:51.001059381 +0000 UTC m=+47.649316204" lastFinishedPulling="2025-12-16 04:27:14.061454828 +0000 UTC m=+70.709711638" observedRunningTime="2025-12-16 04:27:14.708197051 +0000 UTC m=+71.356453876" watchObservedRunningTime="2025-12-16 04:27:14.849247226 +0000 UTC m=+71.497504053"
	Dec 16 04:27:15 addons-920118 kubelet[1286]: I1216 04:27:15.694420    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-sh6dp" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:27:17 addons-920118 kubelet[1286]: I1216 04:27:17.718472    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-zqp9n" podStartSLOduration=56.724516818 podStartE2EDuration="1m7.718453041s" podCreationTimestamp="2025-12-16 04:26:10 +0000 UTC" firstStartedPulling="2025-12-16 04:27:06.444354384 +0000 UTC m=+63.092611206" lastFinishedPulling="2025-12-16 04:27:17.438290613 +0000 UTC m=+74.086547429" observedRunningTime="2025-12-16 04:27:17.717426221 +0000 UTC m=+74.365683048" watchObservedRunningTime="2025-12-16 04:27:17.718453041 +0000 UTC m=+74.366709868"
	Dec 16 04:27:20 addons-920118 kubelet[1286]: I1216 04:27:20.734262    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-76954" podStartSLOduration=64.832074509 podStartE2EDuration="1m10.734232552s" podCreationTimestamp="2025-12-16 04:26:10 +0000 UTC" firstStartedPulling="2025-12-16 04:27:14.254331481 +0000 UTC m=+70.902588299" lastFinishedPulling="2025-12-16 04:27:20.156489513 +0000 UTC m=+76.804746342" observedRunningTime="2025-12-16 04:27:20.733533836 +0000 UTC m=+77.381790663" watchObservedRunningTime="2025-12-16 04:27:20.734232552 +0000 UTC m=+77.382489379"
	Dec 16 04:27:21 addons-920118 kubelet[1286]: I1216 04:27:21.482058    1286 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 16 04:27:21 addons-920118 kubelet[1286]: I1216 04:27:21.482106    1286 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 16 04:27:22 addons-920118 kubelet[1286]: E1216 04:27:22.357794    1286 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 16 04:27:22 addons-920118 kubelet[1286]: E1216 04:27:22.357881    1286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0738a306-d479-46ae-bb3b-4e49e6bc1476-gcr-creds podName:0738a306-d479-46ae-bb3b-4e49e6bc1476 nodeName:}" failed. No retries permitted until 2025-12-16 04:27:54.357862041 +0000 UTC m=+111.006118861 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0738a306-d479-46ae-bb3b-4e49e6bc1476-gcr-creds") pod "registry-creds-764b6fb674-rblqj" (UID: "0738a306-d479-46ae-bb3b-4e49e6bc1476") : secret "registry-creds-gcr" not found
	Dec 16 04:27:22 addons-920118 kubelet[1286]: I1216 04:27:22.430947    1286 scope.go:117] "RemoveContainer" containerID="c89e35a768512908f43c1edad1fb85f868bd09f1eaf83366ae4068bebd11636f"
	Dec 16 04:27:22 addons-920118 kubelet[1286]: I1216 04:27:22.732344    1286 scope.go:117] "RemoveContainer" containerID="c89e35a768512908f43c1edad1fb85f868bd09f1eaf83366ae4068bebd11636f"
	Dec 16 04:27:23 addons-920118 kubelet[1286]: I1216 04:27:23.970349    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s47n7\" (UniqueName: \"kubernetes.io/projected/e648ee9a-4792-42ef-b0a3-c3aafda2e47a-kube-api-access-s47n7\") pod \"e648ee9a-4792-42ef-b0a3-c3aafda2e47a\" (UID: \"e648ee9a-4792-42ef-b0a3-c3aafda2e47a\") "
	Dec 16 04:27:23 addons-920118 kubelet[1286]: I1216 04:27:23.972523    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e648ee9a-4792-42ef-b0a3-c3aafda2e47a-kube-api-access-s47n7" (OuterVolumeSpecName: "kube-api-access-s47n7") pod "e648ee9a-4792-42ef-b0a3-c3aafda2e47a" (UID: "e648ee9a-4792-42ef-b0a3-c3aafda2e47a"). InnerVolumeSpecName "kube-api-access-s47n7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 04:27:24 addons-920118 kubelet[1286]: I1216 04:27:24.071895    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s47n7\" (UniqueName: \"kubernetes.io/projected/e648ee9a-4792-42ef-b0a3-c3aafda2e47a-kube-api-access-s47n7\") on node \"addons-920118\" DevicePath \"\""
	Dec 16 04:27:24 addons-920118 kubelet[1286]: I1216 04:27:24.750378    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3cbbaf8bd1c0d598f16b0165f5c9ef611224cf15d119d4d79b0e9849bb60e18"
	Dec 16 04:27:24 addons-920118 kubelet[1286]: I1216 04:27:24.762171    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-nb8d6" podStartSLOduration=65.989195538 podStartE2EDuration="1m7.762152406s" podCreationTimestamp="2025-12-16 04:26:17 +0000 UTC" firstStartedPulling="2025-12-16 04:27:22.694044291 +0000 UTC m=+79.342301110" lastFinishedPulling="2025-12-16 04:27:24.467001157 +0000 UTC m=+81.115257978" observedRunningTime="2025-12-16 04:27:24.761249312 +0000 UTC m=+81.409506139" watchObservedRunningTime="2025-12-16 04:27:24.762152406 +0000 UTC m=+81.410409236"
	Dec 16 04:27:25 addons-920118 kubelet[1286]: I1216 04:27:25.770802    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-sm7qv" podStartSLOduration=1.414188061 podStartE2EDuration="35.770780815s" podCreationTimestamp="2025-12-16 04:26:50 +0000 UTC" firstStartedPulling="2025-12-16 04:26:50.925746628 +0000 UTC m=+47.574003440" lastFinishedPulling="2025-12-16 04:27:25.282339386 +0000 UTC m=+81.930596194" observedRunningTime="2025-12-16 04:27:25.769731401 +0000 UTC m=+82.417988228" watchObservedRunningTime="2025-12-16 04:27:25.770780815 +0000 UTC m=+82.419037642"
	Dec 16 04:27:28 addons-920118 kubelet[1286]: I1216 04:27:28.706067    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bc7a543f-c0f3-4c70-8051-a580aed61e46-gcp-creds\") pod \"busybox\" (UID: \"bc7a543f-c0f3-4c70-8051-a580aed61e46\") " pod="default/busybox"
	Dec 16 04:27:28 addons-920118 kubelet[1286]: I1216 04:27:28.706150    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x582\" (UniqueName: \"kubernetes.io/projected/bc7a543f-c0f3-4c70-8051-a580aed61e46-kube-api-access-5x582\") pod \"busybox\" (UID: \"bc7a543f-c0f3-4c70-8051-a580aed61e46\") " pod="default/busybox"
	Dec 16 04:27:30 addons-920118 kubelet[1286]: I1216 04:27:30.787443    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5533238150000002 podStartE2EDuration="2.787424713s" podCreationTimestamp="2025-12-16 04:27:28 +0000 UTC" firstStartedPulling="2025-12-16 04:27:28.969079264 +0000 UTC m=+85.617336082" lastFinishedPulling="2025-12-16 04:27:30.20318017 +0000 UTC m=+86.851436980" observedRunningTime="2025-12-16 04:27:30.78699479 +0000 UTC m=+87.435251617" watchObservedRunningTime="2025-12-16 04:27:30.787424713 +0000 UTC m=+87.435681543"
	
	
	==> storage-provisioner [fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021] <==
	W1216 04:27:13.065662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:15.069524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:15.077149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:17.080720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:17.090353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:19.093401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:19.155876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:21.158533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:21.162923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:23.166183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:23.170786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:25.173578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:25.177644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:27.180425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:27.183675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:29.186216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:29.189575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:31.192648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:31.196049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:33.198324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:33.203209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:35.205924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:35.211296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:37.214896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:27:37.218774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
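Note on the storage-provisioner log above: the pod itself is healthy; the repeated client-go warnings only flag that it still reads v1 Endpoints (presumably for its leader-election lock), which Kubernetes v1.33+ deprecates in favor of discovery.k8s.io/v1 EndpointSlice, exactly as the warning text says. An illustrative check of the replacement resource on this cluster (not part of the test, shown only for context):

	# List the EndpointSlice objects that supersede v1 Endpoints on this cluster.
	kubectl --context addons-920118 get endpointslices.discovery.k8s.io -A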
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-920118 -n addons-920118
helpers_test.go:270: (dbg) Run:  kubectl --context addons-920118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-create-rk2d7 gcp-auth-certs-patch-j76z9 ingress-nginx-admission-create-dhdgp ingress-nginx-admission-patch-wgjhv registry-creds-764b6fb674-rblqj
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-920118 describe pod gcp-auth-certs-create-rk2d7 gcp-auth-certs-patch-j76z9 ingress-nginx-admission-create-dhdgp ingress-nginx-admission-patch-wgjhv registry-creds-764b6fb674-rblqj
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-920118 describe pod gcp-auth-certs-create-rk2d7 gcp-auth-certs-patch-j76z9 ingress-nginx-admission-create-dhdgp ingress-nginx-admission-patch-wgjhv registry-creds-764b6fb674-rblqj: exit status 1 (59.462801ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-rk2d7" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-j76z9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-dhdgp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wgjhv" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rblqj" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-920118 describe pod gcp-auth-certs-create-rk2d7 gcp-auth-certs-patch-j76z9 ingress-nginx-admission-create-dhdgp ingress-nginx-admission-patch-wgjhv registry-creds-764b6fb674-rblqj: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable headlamp --alsologtostderr -v=1: exit status 11 (241.362596ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:27:39.448329   19317 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:39.448602   19317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:39.448613   19317 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:39.448617   19317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:39.448779   19317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:39.449046   19317 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:39.449379   19317 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:39.449399   19317 addons.go:622] checking whether the cluster is paused
	I1216 04:27:39.449481   19317 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:39.449492   19317 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:39.449849   19317 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:39.468046   19317 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:39.468103   19317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:39.486416   19317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:39.579520   19317 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:39.579605   19317 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:39.608356   19317 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:39.608379   19317 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:39.608384   19317 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:39.608388   19317 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:39.608391   19317 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:39.608394   19317 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:39.608397   19317 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:39.608400   19317 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:39.608403   19317 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:39.608409   19317 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:39.608414   19317 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:39.608418   19317 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:39.608427   19317 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:39.608432   19317 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:39.608439   19317 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:39.608455   19317 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:39.608464   19317 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:39.608470   19317 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:39.608475   19317 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:39.608479   19317 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:39.608484   19317 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:39.608487   19317 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:39.608490   19317 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:39.608493   19317 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:39.608500   19317 cri.go:89] found id: ""
	I1216 04:27:39.608545   19317 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:39.623299   19317 out.go:203] 
	W1216 04:27:39.624307   19317 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:39.624329   19317 out.go:285] * 
	* 
	W1216 04:27:39.627176   19317 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:39.628274   19317 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.47s)
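Note: the exit status 11 above is minikube's MK_ADDON_DISABLE_PAUSED path rather than a Headlamp-specific error. Before disabling an addon, minikube checks whether the cluster is paused by shelling out to runc, and on this CRI-O node runc's default state directory /run/runc does not exist, so the check itself fails. A minimal reproduction sketch, assuming SSH access to the addons-920118 node (the crictl listing is shown only for comparison and is not what minikube runs):

	# The exact command minikube ran, as captured in the stderr above;
	# it exits non-zero because /run/runc is absent on this CRI-O node.
	minikube -p addons-920118 ssh -- sudo runc list -f json
	# A CRI-level container listing that does work under CRI-O:
	minikube -p addons-920118 ssh -- sudo crictl ps -a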

TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-p2sxn" [4c294149-db16-4c45-9aba-f1cdc9dadcd5] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003150362s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (230.099423ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:28:01.617278   21915 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:28:01.617556   21915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:28:01.617567   21915 out.go:374] Setting ErrFile to fd 2...
	I1216 04:28:01.617573   21915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:28:01.617772   21915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:28:01.618062   21915 mustload.go:66] Loading cluster: addons-920118
	I1216 04:28:01.618394   21915 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:28:01.618418   21915 addons.go:622] checking whether the cluster is paused
	I1216 04:28:01.618529   21915 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:28:01.618545   21915 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:28:01.618919   21915 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:28:01.635959   21915 ssh_runner.go:195] Run: systemctl --version
	I1216 04:28:01.636047   21915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:28:01.653420   21915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:28:01.743512   21915 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:28:01.743572   21915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:28:01.771995   21915 cri.go:89] found id: "52cafa55de02b543343aba6b83ac3e9ffcbe29c2e6789d908423c29d79945e33"
	I1216 04:28:01.772048   21915 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:28:01.772063   21915 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:28:01.772068   21915 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:28:01.772073   21915 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:28:01.772079   21915 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:28:01.772083   21915 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:28:01.772089   21915 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:28:01.772093   21915 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:28:01.772120   21915 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:28:01.772130   21915 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:28:01.772134   21915 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:28:01.772139   21915 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:28:01.772147   21915 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:28:01.772152   21915 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:28:01.772168   21915 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:28:01.772177   21915 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:28:01.772184   21915 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:28:01.772188   21915 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:28:01.772193   21915 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:28:01.772201   21915 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:28:01.772206   21915 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:28:01.772213   21915 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:28:01.772218   21915 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:28:01.772225   21915 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:28:01.772230   21915 cri.go:89] found id: ""
	I1216 04:28:01.772284   21915 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:28:01.785936   21915 out.go:203] 
	W1216 04:28:01.786934   21915 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:28:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:28:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:28:01.786956   21915 out.go:285] * 
	* 
	W1216 04:28:01.789905   21915 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:28:01.791067   21915 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)
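Note: the five addon-disable failures in this run (CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) all abort at the same step: after listing kube-system containers with crictl, the paused-check runs `sudo runc list -f json`, and runc exits with status 1 because its default state root /run/runc does not exist on this crio node. Below is a minimal Go sketch of that probe, assuming only runc's documented global --root flag (whose default is /run/runc); the alternate root path tried here is hypothetical, included to illustrate a fallback, and is not minikube's actual configuration.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// listRunc reproduces the failing probe: `sudo runc list -f json`
// against a given state root. CombinedOutput captures the
// "open /run/runc: no such file or directory" message seen above.
func listRunc(root string) ([]byte, error) {
	return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
}

func main() {
	// /run/runc is runc's default root; the second entry is a
	// hypothetical crio-managed root, shown only to illustrate
	// trying more than one location before giving up.
	for _, root := range []string{"/run/runc", "/run/crio/runc"} {
		out, err := listRunc(root)
		if err != nil {
			fmt.Fprintf(os.Stderr, "root %s: %v\n%s\n", root, err, out)
			continue
		}
		fmt.Printf("root %s: %s\n", root, out)
		return
	}
}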

TestAddons/parallel/LocalPath (8.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-920118 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-920118 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-920118 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [0a6aba2c-4980-419d-b52a-63f7c325f5b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [0a6aba2c-4980-419d-b52a-63f7c325f5b0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [0a6aba2c-4980-419d-b52a-63f7c325f5b0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00336539s
addons_test.go:969: (dbg) Run:  kubectl --context addons-920118 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 ssh "cat /opt/local-path-provisioner/pvc-483bbdf6-b041-40f8-a75d-b907e63bd548_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-920118 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-920118 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (240.516879ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:27:56.021822   21577 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:56.022004   21577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:56.022031   21577 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:56.022040   21577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:56.022329   21577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:56.022678   21577 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:56.023203   21577 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:56.023232   21577 addons.go:622] checking whether the cluster is paused
	I1216 04:27:56.023371   21577 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:56.023394   21577 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:56.023772   21577 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:56.042384   21577 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:56.042431   21577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:56.058852   21577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:56.149318   21577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:56.149395   21577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:56.179437   21577 cri.go:89] found id: "52cafa55de02b543343aba6b83ac3e9ffcbe29c2e6789d908423c29d79945e33"
	I1216 04:27:56.179472   21577 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:56.179479   21577 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:56.179484   21577 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:56.179490   21577 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:56.179495   21577 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:56.179499   21577 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:56.179504   21577 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:56.179508   21577 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:56.179516   21577 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:56.179524   21577 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:56.179529   21577 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:56.179536   21577 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:56.179541   21577 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:56.179548   21577 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:56.179557   21577 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:56.179563   21577 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:56.179566   21577 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:56.179569   21577 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:56.179571   21577 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:56.179576   21577 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:56.179581   21577 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:56.179584   21577 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:56.179587   21577 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:56.179591   21577 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:56.179596   21577 cri.go:89] found id: ""
	I1216 04:27:56.179632   21577 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:56.193909   21577 out.go:203] 
	W1216 04:27:56.195312   21577 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:56.195344   21577 out.go:285] * 
	* 
	W1216 04:27:56.198332   21577 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:56.200168   21577 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.08s)

TestAddons/parallel/NvidiaDevicePlugin (6.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-82xlt" [3d756269-846a-420e-a800-f6ec604ae561] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003920426s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (234.164049ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:28:02.265100   21977 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:28:02.265414   21977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:28:02.265426   21977 out.go:374] Setting ErrFile to fd 2...
	I1216 04:28:02.265429   21977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:28:02.265696   21977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:28:02.266002   21977 mustload.go:66] Loading cluster: addons-920118
	I1216 04:28:02.266369   21977 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:28:02.266391   21977 addons.go:622] checking whether the cluster is paused
	I1216 04:28:02.266485   21977 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:28:02.266499   21977 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:28:02.266909   21977 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:28:02.284391   21977 ssh_runner.go:195] Run: systemctl --version
	I1216 04:28:02.284435   21977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:28:02.301453   21977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:28:02.392706   21977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:28:02.392787   21977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:28:02.421085   21977 cri.go:89] found id: "52cafa55de02b543343aba6b83ac3e9ffcbe29c2e6789d908423c29d79945e33"
	I1216 04:28:02.421123   21977 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:28:02.421127   21977 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:28:02.421131   21977 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:28:02.421134   21977 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:28:02.421137   21977 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:28:02.421146   21977 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:28:02.421149   21977 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:28:02.421152   21977 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:28:02.421157   21977 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:28:02.421160   21977 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:28:02.421163   21977 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:28:02.421166   21977 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:28:02.421169   21977 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:28:02.421172   21977 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:28:02.421177   21977 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:28:02.421183   21977 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:28:02.421186   21977 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:28:02.421188   21977 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:28:02.421191   21977 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:28:02.421194   21977 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:28:02.421196   21977 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:28:02.421199   21977 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:28:02.421209   21977 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:28:02.421215   21977 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:28:02.421217   21977 cri.go:89] found id: ""
	I1216 04:28:02.421257   21977 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:28:02.435674   21977 out.go:203] 
	W1216 04:28:02.436924   21977 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:28:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:28:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:28:02.436945   21977 out.go:285] * 
	* 
	W1216 04:28:02.439933   21977 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:28:02.441363   21977 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.24s)

TestAddons/parallel/Yakd (5.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-phnzx" [a459ac1e-8170-4cd2-a399-855fa751150b] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003380087s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable yakd --alsologtostderr -v=1: exit status 11 (234.168655ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:27:56.374970   21673 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:56.375294   21673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:56.375306   21673 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:56.375314   21673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:56.375497   21673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:56.375778   21673 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:56.376144   21673 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:56.376167   21673 addons.go:622] checking whether the cluster is paused
	I1216 04:27:56.376270   21673 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:56.376286   21673 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:56.376637   21673 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:56.395057   21673 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:56.395118   21673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:56.412790   21673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:56.504718   21673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:56.504793   21673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:56.533263   21673 cri.go:89] found id: "52cafa55de02b543343aba6b83ac3e9ffcbe29c2e6789d908423c29d79945e33"
	I1216 04:27:56.533292   21673 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:56.533296   21673 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:56.533300   21673 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:56.533303   21673 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:56.533307   21673 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:56.533310   21673 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:56.533312   21673 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:56.533315   21673 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:56.533331   21673 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:56.533334   21673 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:56.533337   21673 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:56.533340   21673 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:56.533343   21673 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:56.533346   21673 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:56.533357   21673 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:56.533363   21673 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:56.533368   21673 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:56.533370   21673 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:56.533373   21673 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:56.533376   21673 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:56.533378   21673 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:56.533381   21673 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:56.533383   21673 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:56.533386   21673 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:56.533388   21673 cri.go:89] found id: ""
	I1216 04:27:56.533434   21673 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:56.547388   21673 out.go:203] 
	W1216 04:27:56.548412   21673 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:56.548429   21673 out.go:285] * 
	* 
	W1216 04:27:56.551396   21673 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:56.552596   21673 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)

TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-kffg9" [f903d17e-1ed3-4e13-bca9-dfb3a51519d0] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003835825s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-920118 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-920118 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (251.13613ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:27:42.237243   19453 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:27:42.237367   19453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:42.237377   19453 out.go:374] Setting ErrFile to fd 2...
	I1216 04:27:42.237381   19453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:27:42.237569   19453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:27:42.237831   19453 mustload.go:66] Loading cluster: addons-920118
	I1216 04:27:42.238146   19453 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:42.238163   19453 addons.go:622] checking whether the cluster is paused
	I1216 04:27:42.238248   19453 config.go:182] Loaded profile config "addons-920118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:27:42.238261   19453 host.go:66] Checking if "addons-920118" exists ...
	I1216 04:27:42.238651   19453 cli_runner.go:164] Run: docker container inspect addons-920118 --format={{.State.Status}}
	I1216 04:27:42.259240   19453 ssh_runner.go:195] Run: systemctl --version
	I1216 04:27:42.259299   19453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-920118
	I1216 04:27:42.277785   19453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/addons-920118/id_rsa Username:docker}
	I1216 04:27:42.371349   19453 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:27:42.371445   19453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:27:42.399044   19453 cri.go:89] found id: "7003e956af0ad24810b3d4dd3a8e8c4cf1fd81218828bd6b07bc1038424f92d0"
	I1216 04:27:42.399065   19453 cri.go:89] found id: "8aff65da11b9743a900866476b45d7c879762e0f2171535bc615b6da418d3f86"
	I1216 04:27:42.399069   19453 cri.go:89] found id: "020f3f05b40499ec135c68ee37e10f0398132fe0eeee3711c26db1201de63c98"
	I1216 04:27:42.399072   19453 cri.go:89] found id: "83f26bc5ed0305effefa86febb47dcd45705725e6d317be7ab978115c69434bf"
	I1216 04:27:42.399075   19453 cri.go:89] found id: "086cde64476bc59ae6cbb4db7663780050e3ab41823fb0a2df1ed7e87e63e504"
	I1216 04:27:42.399078   19453 cri.go:89] found id: "630c8b3279b376ac6eda8f001024e661c8cee574031b117bd64e2d07dbef09c8"
	I1216 04:27:42.399081   19453 cri.go:89] found id: "3c77fede48641bc23cdd54751ca04996b95698b4015f883e8f3ed488e64bbd9d"
	I1216 04:27:42.399084   19453 cri.go:89] found id: "e71caaec714d26285d01ade72cb9e3c530acaf24a8a8314e7f570e292868a484"
	I1216 04:27:42.399087   19453 cri.go:89] found id: "b9a013f9f0f76e356eb3400cb9356f2c7aa441b5350e660e332c62fc23c7bb07"
	I1216 04:27:42.399105   19453 cri.go:89] found id: "42eda45aa69e59477a03a296ee80f5d1dd1ca777a4819a9fa675d7b116a7c57e"
	I1216 04:27:42.399108   19453 cri.go:89] found id: "e4e430d943bdb41a0d9c105f653de74d2e468a8cebeb5decad44a5da881b17b0"
	I1216 04:27:42.399111   19453 cri.go:89] found id: "1487414fac27e3f40bc3373fda99dc6ee158f053127db508ae49e9b6d8a94c49"
	I1216 04:27:42.399114   19453 cri.go:89] found id: "1aca72e751ef646b16dc35da92f4e1260c65cb3f64cedc50e20a1e97bead704d"
	I1216 04:27:42.399117   19453 cri.go:89] found id: "7bbeac49d29eab985287e8ec8357dcd821c9232bc112703c32b39487ea48cfe7"
	I1216 04:27:42.399120   19453 cri.go:89] found id: "3ba35a512da898ef55afa50243aec4ce2ef1a39e42758cb4babb45ccda3f8b9a"
	I1216 04:27:42.399134   19453 cri.go:89] found id: "4176f3267a36cebba6a727b77bcf9158a7e5ea5a219ad7d20e36e49a41a10489"
	I1216 04:27:42.399141   19453 cri.go:89] found id: "fe8ccf3878016c20fc672ad7347c3e552cc015fb7cb5e11323085bb0623c7021"
	I1216 04:27:42.399146   19453 cri.go:89] found id: "157d004cac3b92b89ffd535b78937c5c5f4f9d3b4b381fc6f1daa82a6d46945e"
	I1216 04:27:42.399149   19453 cri.go:89] found id: "2b8f11e422be8ce28ba64cd88f52d9d9b87350c6869f9890fbbac8626248b98a"
	I1216 04:27:42.399152   19453 cri.go:89] found id: "20fa3ad645ed1e8c3d4eafa579c1f12952549905cdf9e50a50c7dd1787d21889"
	I1216 04:27:42.399157   19453 cri.go:89] found id: "c71b691cde3171d799abee0d44f1594a457889d5ada29c4ac201236f8efb31b1"
	I1216 04:27:42.399160   19453 cri.go:89] found id: "32eff47c7448aa9f4500cdcd01d1743faa00c0f8eaf3364ed71d5ea02e1d8bc9"
	I1216 04:27:42.399162   19453 cri.go:89] found id: "0cdc3c36bf12cd20ebf4121678a580e26a27cb3a7867fec632a248e22e93c093"
	I1216 04:27:42.399165   19453 cri.go:89] found id: "6973498efaf266a41c9aeba5b6073a14b41fa06611f941dc7802617340bf7391"
	I1216 04:27:42.399167   19453 cri.go:89] found id: ""
	I1216 04:27:42.399204   19453 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:27:42.412868   19453 out.go:203] 
	W1216 04:27:42.413977   19453 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:27:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:27:42.413994   19453 out.go:285] * 
	* 
	W1216 04:27:42.416836   19453 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:27:42.418037   19453 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-920118 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (2.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-767327 image ls --format short --alsologtostderr: (2.358067668s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767327 image ls --format short --alsologtostderr:

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767327 image ls --format short --alsologtostderr:
I1216 04:37:20.053655   66765 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:20.053754   66765 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:20.053761   66765 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:20.053768   66765 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:20.054071   66765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:37:20.054875   66765 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:20.055124   66765 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:20.055850   66765 cli_runner.go:164] Run: docker container inspect functional-767327 --format={{.State.Status}}
I1216 04:37:20.084599   66765 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:20.084738   66765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767327
I1216 04:37:20.112096   66765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-767327/id_rsa Username:docker}
I1216 04:37:20.222275   66765 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 04:37:22.254047   66765 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.031719529s)
W1216 04:37:22.254152   66765 cache_images.go:736] Failed to list images for profile functional-767327 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1216 04:37:22.251306    7248 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-12-16T04:37:22Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (2.36s)
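Note: the four ImageList format failures (short, table, json, yaml) share one symptom: `sudo crictl images --output json` on the node exceeds the CRI client's deadline (DeadlineExceeded), so `image ls` returns an empty list and the expected registry.k8s.io/pause entry is missing. The Go sketch below re-runs the same listing under an explicit context deadline so a slow image service surfaces as a distinguishable context error; the 30-second budget is an arbitrary illustrative value (crictl's own global --timeout flag would be the more direct knob), not a setting taken from this test run.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Give the image service more headroom than the run above hit;
	// the 30s budget is an assumption chosen for illustration.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Same command the test helper runs over SSH inside the node.
	cmd := exec.CommandContext(ctx, "sudo", "crictl", "images", "--output", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// ctx.Err() distinguishes our local deadline from crictl's
		// own DeadlineExceeded bubbling up from the CRI service.
		fmt.Printf("crictl images failed (ctx err: %v): %v\n%s\n", ctx.Err(), err, out)
		return
	}
	fmt.Printf("%s\n", out)
}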

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (2.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-767327 image ls --format table --alsologtostderr: (2.287882931s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767327 image ls --format table --alsologtostderr:
┌───────┬─────┬──────────┬──────┐
│ IMAGE │ TAG │ IMAGE ID │ SIZE │
└───────┴─────┴──────────┴──────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767327 image ls --format table --alsologtostderr:
I1216 04:37:22.431771   67738 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:22.432034   67738 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:22.432040   67738 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:22.432046   67738 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:22.432359   67738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:37:22.433093   67738 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:22.433236   67738 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:22.433699   67738 cli_runner.go:164] Run: docker container inspect functional-767327 --format={{.State.Status}}
I1216 04:37:22.462314   67738 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:22.462382   67738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767327
I1216 04:37:22.492184   67738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-767327/id_rsa Username:docker}
I1216 04:37:22.591916   67738 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 04:37:24.621849   67738 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.029898786s)
W1216 04:37:24.621923   67738 cache_images.go:736] Failed to list images for profile functional-767327 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1216 04:37:24.619078    7491 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-16T04:37:24Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected │ registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (2.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (2.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-767327 image ls --format json --alsologtostderr: (2.293272076s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767327 image ls --format json --alsologtostderr:
[]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767327 image ls --format json --alsologtostderr:
I1216 04:37:22.429900   67733 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:22.430053   67733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:22.430061   67733 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:22.430068   67733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:22.430348   67733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:37:22.431201   67733 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:22.431348   67733 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:22.431968   67733 cli_runner.go:164] Run: docker container inspect functional-767327 --format={{.State.Status}}
I1216 04:37:22.455130   67733 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:22.455200   67733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767327
I1216 04:37:22.487264   67733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-767327/id_rsa Username:docker}
I1216 04:37:22.589649   67733 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 04:37:24.620843   67733 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.031126543s)
W1216 04:37:24.620939   67733 cache_images.go:736] Failed to list images for profile functional-767327 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1216 04:37:24.618195    7488 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-16T04:37:24Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (2.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (2.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-767327 image ls --format yaml --alsologtostderr: (2.298519576s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767327 image ls --format yaml --alsologtostderr:
[]

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767327 image ls --format yaml --alsologtostderr:
I1216 04:37:20.132973   66809 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:20.133356   66809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:20.133410   66809 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:20.133428   66809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:20.133776   66809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:37:20.134541   66809 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:20.134772   66809 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:20.135304   66809 cli_runner.go:164] Run: docker container inspect functional-767327 --format={{.State.Status}}
I1216 04:37:20.160940   66809 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:20.161010   66809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767327
I1216 04:37:20.185915   66809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-767327/id_rsa Username:docker}
I1216 04:37:20.290727   66809 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 04:37:22.328551   66809 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.037723483s)
W1216 04:37:22.328655   66809 cache_images.go:736] Failed to list images for profile functional-767327 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1216 04:37:22.324269    7265 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-12-16T04:37:22Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (2.30s)

TestJSONOutput/pause/Command (2.39s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-484990 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-484990 --output=json --user=testUser: exit status 80 (2.390370174s)

-- stdout --
	{"specversion":"1.0","id":"c68d4adc-5c13-49e4-8ba0-a6b693b6efb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-484990 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b1bf2259-af09-475b-9dfd-44dfe6ee52a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-16T04:46:51Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"48fdd4e2-c058-4bf4-a09c-d17557370066","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-484990 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.39s)
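With --output=json, minikube emits one CloudEvents-style object per line (specversion, type, data), as captured in the stdout above; the test fails because a GUEST_PAUSE error event with exit code 80 arrives instead of a clean pause step. A minimal Go sketch, assuming the event stream arrives on stdin (the struct and file name are illustrative, not the test's own decoder), of reading that stream and surfacing error events:

// events_sketch.go — hypothetical consumer of minikube's --output=json
// stream: decode each line and report io.k8s.sigs.minikube.error events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event captures only the fields used below; data values are strings in
// the events shown above (name, exitcode, message, ...).
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be very long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s (exitcode %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout above, this would print the GUEST_PAUSE event and its runc error message; the unpause failure below emits the analogous GUEST_UNPAUSE event.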

TestJSONOutput/unpause/Command (2.24s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-484990 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-484990 --output=json --user=testUser: exit status 80 (2.239924927s)

-- stdout --
	{"specversion":"1.0","id":"9c7f8a76-f21a-4734-aae5-e8dc20accdd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-484990 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"f03b7441-822e-410a-844e-7dd33c300b89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-16T04:46:53Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"31fee9eb-0a5c-448e-823f-288282aa68ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-484990 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.24s)

TestPause/serial/Pause (5.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-484421 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-484421 --alsologtostderr -v=5: exit status 80 (2.438236457s)

-- stdout --
	* Pausing node pause-484421 ... 
	
	

-- /stdout --
** stderr ** 
	I1216 05:00:05.362772  216899 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:00:05.363046  216899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:00:05.363058  216899 out.go:374] Setting ErrFile to fd 2...
	I1216 05:00:05.363062  216899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:00:05.363363  216899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:00:05.363682  216899 out.go:368] Setting JSON to false
	I1216 05:00:05.363704  216899 mustload.go:66] Loading cluster: pause-484421
	I1216 05:00:05.364142  216899 config.go:182] Loaded profile config "pause-484421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:00:05.364544  216899 cli_runner.go:164] Run: docker container inspect pause-484421 --format={{.State.Status}}
	I1216 05:00:05.383996  216899 host.go:66] Checking if "pause-484421" exists ...
	I1216 05:00:05.384366  216899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:00:05.445864  216899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-16 05:00:05.43499581 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:00:05.446503  216899 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-484421 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 05:00:05.448552  216899 out.go:179] * Pausing node pause-484421 ... 
	I1216 05:00:05.449796  216899 host.go:66] Checking if "pause-484421" exists ...
	I1216 05:00:05.450056  216899 ssh_runner.go:195] Run: systemctl --version
	I1216 05:00:05.450096  216899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 05:00:05.468347  216899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 05:00:05.563560  216899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:00:05.577710  216899 pause.go:52] kubelet running: true
	I1216 05:00:05.577773  216899 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:00:05.716243  216899 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:00:05.716325  216899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:00:05.783367  216899 cri.go:89] found id: "d9defad5f5e81a2f4791a6552668e65d334fce9ddba96c6a67b8671f5d537f5a"
	I1216 05:00:05.783394  216899 cri.go:89] found id: "968f87f2624b3637b57b139c9e9e25f16f3f234a64e0731dcb862238901cb6ba"
	I1216 05:00:05.783417  216899 cri.go:89] found id: "d91dd18a4d06c4328e2d20df468a15ef3f5bbe97032ee881f9ad98aad838254a"
	I1216 05:00:05.783422  216899 cri.go:89] found id: "ec7e20b14ddebcfe8a170e905b9398b74f569c252da3551cef9f2dad88d54764"
	I1216 05:00:05.783427  216899 cri.go:89] found id: "cb8a304fb27d828a1b7836282095b903d391000ae25636c087d8fd606b472c2c"
	I1216 05:00:05.783435  216899 cri.go:89] found id: "58e9c0db8756debadb6ff202a2543533c637024ea73b8961b533b6fadd6efd18"
	I1216 05:00:05.783443  216899 cri.go:89] found id: "458bca56d150f434d9429af83aea10d61e55e77847c80da62658142b3d777fcc"
	I1216 05:00:05.783448  216899 cri.go:89] found id: ""
	I1216 05:00:05.783489  216899 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:00:05.796080  216899 retry.go:31] will retry after 260.899659ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:00:05Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:00:06.057568  216899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:00:06.071034  216899 pause.go:52] kubelet running: false
	I1216 05:00:06.071099  216899 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:00:06.197277  216899 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:00:06.197355  216899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:00:06.265211  216899 cri.go:89] found id: "d9defad5f5e81a2f4791a6552668e65d334fce9ddba96c6a67b8671f5d537f5a"
	I1216 05:00:06.265235  216899 cri.go:89] found id: "968f87f2624b3637b57b139c9e9e25f16f3f234a64e0731dcb862238901cb6ba"
	I1216 05:00:06.265240  216899 cri.go:89] found id: "d91dd18a4d06c4328e2d20df468a15ef3f5bbe97032ee881f9ad98aad838254a"
	I1216 05:00:06.265243  216899 cri.go:89] found id: "ec7e20b14ddebcfe8a170e905b9398b74f569c252da3551cef9f2dad88d54764"
	I1216 05:00:06.265246  216899 cri.go:89] found id: "cb8a304fb27d828a1b7836282095b903d391000ae25636c087d8fd606b472c2c"
	I1216 05:00:06.265249  216899 cri.go:89] found id: "58e9c0db8756debadb6ff202a2543533c637024ea73b8961b533b6fadd6efd18"
	I1216 05:00:06.265252  216899 cri.go:89] found id: "458bca56d150f434d9429af83aea10d61e55e77847c80da62658142b3d777fcc"
	I1216 05:00:06.265254  216899 cri.go:89] found id: ""
	I1216 05:00:06.265327  216899 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:00:06.277452  216899 retry.go:31] will retry after 558.537288ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:00:06Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:00:06.836176  216899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:00:06.849295  216899 pause.go:52] kubelet running: false
	I1216 05:00:06.849359  216899 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:00:06.963480  216899 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:00:06.963575  216899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:00:07.028368  216899 cri.go:89] found id: "d9defad5f5e81a2f4791a6552668e65d334fce9ddba96c6a67b8671f5d537f5a"
	I1216 05:00:07.028394  216899 cri.go:89] found id: "968f87f2624b3637b57b139c9e9e25f16f3f234a64e0731dcb862238901cb6ba"
	I1216 05:00:07.028400  216899 cri.go:89] found id: "d91dd18a4d06c4328e2d20df468a15ef3f5bbe97032ee881f9ad98aad838254a"
	I1216 05:00:07.028405  216899 cri.go:89] found id: "ec7e20b14ddebcfe8a170e905b9398b74f569c252da3551cef9f2dad88d54764"
	I1216 05:00:07.028410  216899 cri.go:89] found id: "cb8a304fb27d828a1b7836282095b903d391000ae25636c087d8fd606b472c2c"
	I1216 05:00:07.028415  216899 cri.go:89] found id: "58e9c0db8756debadb6ff202a2543533c637024ea73b8961b533b6fadd6efd18"
	I1216 05:00:07.028419  216899 cri.go:89] found id: "458bca56d150f434d9429af83aea10d61e55e77847c80da62658142b3d777fcc"
	I1216 05:00:07.028423  216899 cri.go:89] found id: ""
	I1216 05:00:07.028469  216899 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:00:07.040688  216899 retry.go:31] will retry after 477.476542ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:00:07Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:00:07.518375  216899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:00:07.531559  216899 pause.go:52] kubelet running: false
	I1216 05:00:07.531616  216899 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:00:07.650115  216899 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:00:07.650211  216899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:00:07.715670  216899 cri.go:89] found id: "d9defad5f5e81a2f4791a6552668e65d334fce9ddba96c6a67b8671f5d537f5a"
	I1216 05:00:07.715691  216899 cri.go:89] found id: "968f87f2624b3637b57b139c9e9e25f16f3f234a64e0731dcb862238901cb6ba"
	I1216 05:00:07.715695  216899 cri.go:89] found id: "d91dd18a4d06c4328e2d20df468a15ef3f5bbe97032ee881f9ad98aad838254a"
	I1216 05:00:07.715698  216899 cri.go:89] found id: "ec7e20b14ddebcfe8a170e905b9398b74f569c252da3551cef9f2dad88d54764"
	I1216 05:00:07.715701  216899 cri.go:89] found id: "cb8a304fb27d828a1b7836282095b903d391000ae25636c087d8fd606b472c2c"
	I1216 05:00:07.715705  216899 cri.go:89] found id: "58e9c0db8756debadb6ff202a2543533c637024ea73b8961b533b6fadd6efd18"
	I1216 05:00:07.715710  216899 cri.go:89] found id: "458bca56d150f434d9429af83aea10d61e55e77847c80da62658142b3d777fcc"
	I1216 05:00:07.715714  216899 cri.go:89] found id: ""
	I1216 05:00:07.715757  216899 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:00:07.728946  216899 out.go:203] 
	W1216 05:00:07.730097  216899 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:00:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:00:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 05:00:07.730137  216899 out.go:285] * 
	* 
	W1216 05:00:07.733807  216899 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:00:07.735863  216899 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-484421 --alsologtostderr -v=5" : exit status 80
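The stderr above shows the pause path probing the runtime with `sudo runc list -f json`, retrying three times with sub-second backoffs, and finally exiting with GUEST_PAUSE because /run/runc does not exist on the node. A minimal Go sketch of that retry-then-fail shape; the helper is a generic, assumed illustration, not minikube's retry.go:

// retry_sketch.go — assumed helper illustrating the "will retry after ..."
// pattern in the stderr above: retry a flaky operation with growing
// delays, then return the last error.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retry(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off a little more each round
	}
	return err
}

func main() {
	// Stand-in for `sudo runc list -f json` on a node without /run/runc.
	err := retry(3, 250*time.Millisecond, func() error {
		return errors.New("open /run/runc: no such file or directory")
	})
	fmt.Println("exiting:", err) // minikube maps this to GUEST_PAUSE / exit 80
}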
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-484421
helpers_test.go:244: (dbg) docker inspect pause-484421:

-- stdout --
	[
	    {
	        "Id": "84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e",
	        "Created": "2025-12-16T04:58:45.466722374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195048,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:58:45.910170606Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e/hosts",
	        "LogPath": "/var/lib/docker/containers/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e-json.log",
	        "Name": "/pause-484421",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-484421:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-484421",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e",
	                "LowerDir": "/var/lib/docker/overlay2/eee08329c71ed8656934d7f878d205904438c61474f4f4033f200db192505af5-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee08329c71ed8656934d7f878d205904438c61474f4f4033f200db192505af5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee08329c71ed8656934d7f878d205904438c61474f4f4033f200db192505af5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee08329c71ed8656934d7f878d205904438c61474f4f4033f200db192505af5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-484421",
	                "Source": "/var/lib/docker/volumes/pause-484421/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-484421",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-484421",
	                "name.minikube.sigs.k8s.io": "pause-484421",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fcdeaa3cb627a3ae98438e58cf5d3896559b464d98dff5dc2a58b7e7dadd39cb",
	            "SandboxKey": "/var/run/docker/netns/fcdeaa3cb627",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-484421": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db2b418b51b2872ed7829d45469e7e2e3e75d46725a64a617e02cd49a4a698b0",
	                    "EndpointID": "efad74be823058e6c8cceabf3078d80a2f63e5c53ec08920f2f5a6a39d12484b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "2e:e1:85:12:b8:e1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-484421",
	                        "84295152ef79"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
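The post-mortem resolves the node's SSH endpoint from the NetworkSettings.Ports map above (22/tcp is bound to host port 32981), using the same inspect template seen in the earlier cli_runner lines. A minimal Go sketch, assuming a local docker CLI and the container name from this report; the file name is illustrative:

// hostport_sketch.go — hypothetical helper: read the host port mapped to
// container port 22/tcp with the inspect template the logs above run.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-484421").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	// For the container inspected above, this prints 32981.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}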
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-484421 -n pause-484421
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-484421 -n pause-484421: exit status 2 (313.997515ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-484421 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --cancel-scheduled                                                                                                     │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │ 16 Dec 25 04:57 UTC │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │ 16 Dec 25 04:57 UTC │
	│ delete  │ -p scheduled-stop-694840                                                                                                                        │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:58 UTC │
	│ start   │ -p insufficient-storage-446584 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                │ insufficient-storage-446584 │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │                     │
	│ delete  │ -p insufficient-storage-446584                                                                                                                  │ insufficient-storage-446584 │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:58 UTC │
	│ start   │ -p pause-484421 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-484421                │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p force-systemd-env-515540 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                      │ force-systemd-env-515540    │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p offline-crio-464497 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                               │ offline-crio-464497         │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p stopped-upgrade-572334 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-572334      │ jenkins │ v1.35.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:59 UTC │
	│ delete  │ -p force-systemd-env-515540                                                                                                                     │ force-systemd-env-515540    │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ stop    │ stopped-upgrade-572334 stop                                                                                                                     │ stopped-upgrade-572334      │ jenkins │ v1.35.0 │ 16 Dec 25 04:59 UTC │                     │
	│ start   │ -p stopped-upgrade-572334 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-572334      │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │                     │
	│ start   │ -p missing-upgrade-808405 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-808405      │ jenkins │ v1.35.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ delete  │ -p offline-crio-464497                                                                                                                          │ offline-crio-464497         │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-132399   │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p missing-upgrade-808405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-808405      │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-132399                                                                                                                    │ kubernetes-upgrade-132399   │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-132399   │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │                     │
	│ start   │ -p pause-484421 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-484421                │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 05:00 UTC │
	│ pause   │ -p pause-484421 --alsologtostderr -v=5                                                                                                          │ pause-484421                │ jenkins │ v1.37.0 │ 16 Dec 25 05:00 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:59:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:59:58.534657  214758 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:59:58.534873  214758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:59:58.534881  214758 out.go:374] Setting ErrFile to fd 2...
	I1216 04:59:58.534885  214758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:59:58.535078  214758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:59:58.535502  214758 out.go:368] Setting JSON to false
	I1216 04:59:58.536595  214758 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2549,"bootTime":1765858649,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:59:58.536653  214758 start.go:143] virtualization: kvm guest
	I1216 04:59:58.538524  214758 out.go:179] * [pause-484421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:59:58.539949  214758 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:59:58.539938  214758 notify.go:221] Checking for updates...
	I1216 04:59:58.541355  214758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:59:58.542638  214758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:59:58.543958  214758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:59:58.545099  214758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:59:58.546334  214758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:59:58.547965  214758 config.go:182] Loaded profile config "pause-484421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:59:58.548508  214758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:59:58.574398  214758 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:59:58.574478  214758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:59:58.628836  214758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 04:59:58.619167051 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:59:58.628939  214758 docker.go:319] overlay module found
	I1216 04:59:58.630801  214758 out.go:179] * Using the docker driver based on existing profile
	I1216 04:59:58.631966  214758 start.go:309] selected driver: docker
	I1216 04:59:58.631980  214758 start.go:927] validating driver "docker" against &{Name:pause-484421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:59:58.632110  214758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:59:58.632194  214758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:59:58.689693  214758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 04:59:58.6808779 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:59:58.690407  214758 cni.go:84] Creating CNI manager for ""
	I1216 04:59:58.690474  214758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:59:58.690523  214758 start.go:353] cluster config:
	{Name:pause-484421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:59:58.692267  214758 out.go:179] * Starting "pause-484421" primary control-plane node in "pause-484421" cluster
	I1216 04:59:58.693424  214758 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:59:58.694728  214758 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:59:58.695921  214758 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:59:58.696003  214758 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:59:58.696035  214758 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 04:59:58.696048  214758 cache.go:65] Caching tarball of preloaded images
	I1216 04:59:58.696157  214758 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 04:59:58.696171  214758 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 04:59:58.696334  214758 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/config.json ...
	I1216 04:59:58.716081  214758 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 04:59:58.716100  214758 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 04:59:58.716120  214758 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:59:58.716152  214758 start.go:360] acquireMachinesLock for pause-484421: {Name:mk6ce8f3dbf0021b08c180db8063ac4fd60e84fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:59:58.716229  214758 start.go:364] duration metric: took 58.562µs to acquireMachinesLock for "pause-484421"
	I1216 04:59:58.716247  214758 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:59:58.716253  214758 fix.go:54] fixHost starting: 
	I1216 04:59:58.716520  214758 cli_runner.go:164] Run: docker container inspect pause-484421 --format={{.State.Status}}
	I1216 04:59:58.734060  214758 fix.go:112] recreateIfNeeded on pause-484421: state=Running err=<nil>
	W1216 04:59:58.734093  214758 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:59:59.094207  212055 cli_runner.go:164] Run: docker container inspect missing-upgrade-808405 --format={{.State.Status}}
	W1216 04:59:59.111491  212055 cli_runner.go:211] docker container inspect missing-upgrade-808405 --format={{.State.Status}} returned with exit code 1
	I1216 04:59:59.111575  212055 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-808405": docker container inspect missing-upgrade-808405 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-808405
	I1216 04:59:59.111594  212055 oci.go:673] temporary error: container missing-upgrade-808405 status is  but expect it to be exited
	I1216 04:59:59.111641  212055 oci.go:88] couldn't shut down missing-upgrade-808405 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-808405": docker container inspect missing-upgrade-808405 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-808405
	 
	I1216 04:59:59.111689  212055 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-808405
	I1216 04:59:59.129259  212055 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-808405
	W1216 04:59:59.145906  212055 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-808405 returned with exit code 1
	I1216 04:59:59.146013  212055 cli_runner.go:164] Run: docker network inspect missing-upgrade-808405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:59:59.163125  212055 cli_runner.go:164] Run: docker network rm missing-upgrade-808405
	I1216 04:59:59.302383  212055 fix.go:124] Sleeping 1 second for extra luck!
	I1216 05:00:00.303155  212055 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:59:57.159244  213062 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:59:57.327210  213062 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:59:57.371454  213062 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:59:57.421939  213062 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:59:57.422014  213062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:59:57.922157  213062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:59:57.936235  213062 api_server.go:72] duration metric: took 514.303047ms to wait for apiserver process to appear ...
	I1216 04:59:57.936317  213062 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:59:57.936341  213062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 04:59:57.936725  213062 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 04:59:58.437159  213062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 04:59:58.735847  214758 out.go:252] * Updating the running docker "pause-484421" container ...
	I1216 04:59:58.735890  214758 machine.go:94] provisionDockerMachine start ...
	I1216 04:59:58.735965  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:58.752985  214758 main.go:143] libmachine: Using SSH client type: native
	I1216 04:59:58.753267  214758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1216 04:59:58.753288  214758 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:59:58.876799  214758 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-484421
	
	I1216 04:59:58.876823  214758 ubuntu.go:182] provisioning hostname "pause-484421"
	I1216 04:59:58.876869  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:58.895704  214758 main.go:143] libmachine: Using SSH client type: native
	I1216 04:59:58.896014  214758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1216 04:59:58.896047  214758 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-484421 && echo "pause-484421" | sudo tee /etc/hostname
	I1216 04:59:59.028730  214758 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-484421
	
	I1216 04:59:59.028801  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.045814  214758 main.go:143] libmachine: Using SSH client type: native
	I1216 04:59:59.046097  214758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1216 04:59:59.046121  214758 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-484421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-484421/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-484421' | sudo tee -a /etc/hosts; 
				fi
			fi
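The guarded script above ensures exactly one 127.0.1.1 mapping for the node name in /etc/hosts. A quick manual check (illustrative, not part of minikube's own flow):

    grep pause-484421 /etc/hosts
    # expected: 127.0.1.1 pause-484421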
	I1216 04:59:59.171430  214758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:59:59.171462  214758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 04:59:59.171497  214758 ubuntu.go:190] setting up certificates
	I1216 04:59:59.171511  214758 provision.go:84] configureAuth start
	I1216 04:59:59.171571  214758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-484421
	I1216 04:59:59.190866  214758 provision.go:143] copyHostCerts
	I1216 04:59:59.190933  214758 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 04:59:59.190951  214758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 04:59:59.191009  214758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 04:59:59.191132  214758 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 04:59:59.191143  214758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 04:59:59.191173  214758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 04:59:59.191265  214758 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 04:59:59.191273  214758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 04:59:59.191297  214758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 04:59:59.191359  214758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.pause-484421 san=[127.0.0.1 192.168.85.2 localhost minikube pause-484421]
	I1216 04:59:59.285172  214758 provision.go:177] copyRemoteCerts
	I1216 04:59:59.285235  214758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:59:59.285283  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.302603  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 04:59:59.394488  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:59:59.411517  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 04:59:59.429256  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 04:59:59.446112  214758 provision.go:87] duration metric: took 274.583519ms to configureAuth
	I1216 04:59:59.446135  214758 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:59:59.446375  214758 config.go:182] Loaded profile config "pause-484421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:59:59.446497  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.463867  214758 main.go:143] libmachine: Using SSH client type: native
	I1216 04:59:59.464107  214758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1216 04:59:59.464124  214758 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:59:59.775223  214758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:59:59.775251  214758 machine.go:97] duration metric: took 1.039348463s to provisionDockerMachine
	I1216 04:59:59.775266  214758 start.go:293] postStartSetup for "pause-484421" (driver="docker")
	I1216 04:59:59.775277  214758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:59:59.775346  214758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:59:59.775381  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.792773  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 04:59:59.885498  214758 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:59:59.889135  214758 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:59:59.889165  214758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:59:59.889175  214758 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 04:59:59.889224  214758 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 04:59:59.889323  214758 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 04:59:59.889443  214758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:59:59.896752  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 04:59:59.913354  214758 start.go:296] duration metric: took 138.074334ms for postStartSetup
	I1216 04:59:59.913413  214758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:59:59.913448  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.931107  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 05:00:00.019568  214758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:00:00.024476  214758 fix.go:56] duration metric: took 1.308216106s for fixHost
	I1216 05:00:00.024506  214758 start.go:83] releasing machines lock for "pause-484421", held for 1.30826709s
	I1216 05:00:00.024569  214758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-484421
	I1216 05:00:00.042385  214758 ssh_runner.go:195] Run: cat /version.json
	I1216 05:00:00.042429  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 05:00:00.042481  214758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:00:00.042548  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 05:00:00.059827  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 05:00:00.060126  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 05:00:00.199335  214758 ssh_runner.go:195] Run: systemctl --version
	I1216 05:00:00.206614  214758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:00:00.240521  214758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:00:00.245629  214758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:00:00.245687  214758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:00:00.253668  214758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:00:00.253696  214758 start.go:496] detecting cgroup driver to use...
	I1216 05:00:00.253726  214758 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:00:00.253778  214758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:00:00.267426  214758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:00:00.279074  214758 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:00:00.279133  214758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:00:00.292874  214758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:00:00.304929  214758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:00:00.434165  214758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:00:00.563109  214758 docker.go:234] disabling docker service ...
	I1216 05:00:00.563174  214758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:00:00.578112  214758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:00:00.592535  214758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:00:00.722460  214758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:00:00.857471  214758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:00:00.870548  214758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:00:00.887248  214758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:00:00.887309  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.904489  214758 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:00:00.904558  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.914005  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.923639  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.933411  214758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:00:00.941672  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.951191  214758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.960109  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.969640  214758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:00:00.977545  214758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:00:00.985110  214758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:00:01.093762  214758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:00:02.150039  214758 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.05621738s)
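The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting crio. A minimal way to confirm the keys it touched, reconstructed from the commands in this run (not something minikube itself executes):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected after this run:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",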
	I1216 05:00:02.150075  214758 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:00:02.150130  214758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:00:02.154274  214758 start.go:564] Will wait 60s for crictl version
	I1216 05:00:02.154334  214758 ssh_runner.go:195] Run: which crictl
	I1216 05:00:02.157833  214758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:00:02.182669  214758 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:00:02.182771  214758 ssh_runner.go:195] Run: crio --version
	I1216 05:00:02.210979  214758 ssh_runner.go:195] Run: crio --version
	I1216 05:00:02.244494  214758 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 04:59:58.608446  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 04:59:58.608484  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:02.246071  214758 cli_runner.go:164] Run: docker network inspect pause-484421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
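The Go template in the inspect call above extracts network name, driver, subnet, gateway, MTU and attached container IPs in a single pass. A pared-down sketch of the same idea (fields trimmed for illustration):

    docker network inspect pause-484421 \
      --format '{{.Name}} {{.Driver}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'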
	I1216 05:00:02.262859  214758 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:00:02.267076  214758 kubeadm.go:884] updating cluster {Name:pause-484421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:00:02.267213  214758 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:00:02.267260  214758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:00:02.297131  214758 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:00:02.297152  214758 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:00:02.297208  214758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:00:02.323135  214758 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:00:02.323161  214758 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:00:02.323170  214758 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1216 05:00:02.323281  214758 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-484421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:00:02.323365  214758 ssh_runner.go:195] Run: crio config
	I1216 05:00:02.369739  214758 cni.go:84] Creating CNI manager for ""
	I1216 05:00:02.369766  214758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:00:02.369786  214758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:00:02.369817  214758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-484421 NodeName:pause-484421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:00:02.370013  214758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-484421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:00:02.370111  214758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:00:02.378462  214758 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:00:02.378533  214758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:00:02.386530  214758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 05:00:02.399684  214758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:00:02.413623  214758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
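The 2208-byte kubeadm.yaml.new written above is the rendered config shown earlier. One way to validate such a file by hand, assuming the kubeadm binary staged under /var/lib/minikube/binaries (--dry-run makes no changes to the node):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run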
	I1216 05:00:02.428170  214758 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:00:02.432555  214758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:00:02.578949  214758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:00:02.593201  214758 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421 for IP: 192.168.85.2
	I1216 05:00:02.593228  214758 certs.go:195] generating shared ca certs ...
	I1216 05:00:02.593249  214758 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:00:02.593436  214758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:00:02.593497  214758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:00:02.593511  214758 certs.go:257] generating profile certs ...
	I1216 05:00:02.593649  214758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.key
	I1216 05:00:02.593743  214758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/apiserver.key.29cd6c06
	I1216 05:00:02.593804  214758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/proxy-client.key
	I1216 05:00:02.593970  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:00:02.594029  214758 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:00:02.594044  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:00:02.594084  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:00:02.594124  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:00:02.594162  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:00:02.594236  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:00:02.594935  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:00:02.614371  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:00:02.632629  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:00:02.650795  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:00:02.668408  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:00:02.686090  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:00:02.703270  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:00:02.720786  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:00:02.737906  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:00:02.755428  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:00:02.773930  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:00:02.791706  214758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:00:02.804954  214758 ssh_runner.go:195] Run: openssl version
	I1216 05:00:02.811200  214758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:00:02.818518  214758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:00:02.826110  214758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:00:02.829902  214758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:00:02.829965  214758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:00:02.865796  214758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:00:02.873696  214758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:00:02.881541  214758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:00:02.889515  214758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:00:02.893246  214758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:00:02.893290  214758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:00:02.928055  214758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:00:02.936281  214758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:00:02.943858  214758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:00:02.952283  214758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:00:02.956089  214758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:00:02.956149  214758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:00:03.004414  214758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
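Each ln -fs / test -L pair above names the /etc/ssl/certs symlink after the certificate's OpenSSL subject hash, produced by the openssl x509 -hash invocations already shown. For example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink verified above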
	I1216 05:00:03.012665  214758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:00:03.016655  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:00:03.053746  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:00:03.090083  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:00:03.127767  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:00:03.171873  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:00:03.206867  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
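The -checkend 86400 runs above ask whether each certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if it will have expired by then. A standalone sketch:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for at least another day" || echo "expires within 24h"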
	I1216 05:00:03.245529  214758 kubeadm.go:401] StartCluster: {Name:pause-484421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:00:03.245632  214758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:00:03.245696  214758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:00:03.273391  214758 cri.go:89] found id: "d9defad5f5e81a2f4791a6552668e65d334fce9ddba96c6a67b8671f5d537f5a"
	I1216 05:00:03.273413  214758 cri.go:89] found id: "968f87f2624b3637b57b139c9e9e25f16f3f234a64e0731dcb862238901cb6ba"
	I1216 05:00:03.273417  214758 cri.go:89] found id: "d91dd18a4d06c4328e2d20df468a15ef3f5bbe97032ee881f9ad98aad838254a"
	I1216 05:00:03.273420  214758 cri.go:89] found id: "ec7e20b14ddebcfe8a170e905b9398b74f569c252da3551cef9f2dad88d54764"
	I1216 05:00:03.273423  214758 cri.go:89] found id: "cb8a304fb27d828a1b7836282095b903d391000ae25636c087d8fd606b472c2c"
	I1216 05:00:03.273426  214758 cri.go:89] found id: "58e9c0db8756debadb6ff202a2543533c637024ea73b8961b533b6fadd6efd18"
	I1216 05:00:03.273429  214758 cri.go:89] found id: "458bca56d150f434d9429af83aea10d61e55e77847c80da62658142b3d777fcc"
	I1216 05:00:03.273432  214758 cri.go:89] found id: ""
	I1216 05:00:03.273469  214758 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:00:03.285620  214758 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:00:03Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:00:03.285685  214758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:00:03.293628  214758 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:00:03.293649  214758 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:00:03.293701  214758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:00:03.300963  214758 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:00:03.301762  214758 kubeconfig.go:125] found "pause-484421" server: "https://192.168.85.2:8443"
	I1216 05:00:03.302899  214758 kapi.go:59] client config for pause-484421: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.key", CAFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:00:03.303318  214758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 05:00:03.303338  214758 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 05:00:03.303351  214758 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 05:00:03.303358  214758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 05:00:03.303362  214758 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 05:00:03.303672  214758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:00:03.311091  214758 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 05:00:03.311123  214758 kubeadm.go:602] duration metric: took 17.469509ms to restartPrimaryControlPlane
	I1216 05:00:03.311132  214758 kubeadm.go:403] duration metric: took 65.612349ms to StartCluster
	I1216 05:00:03.311147  214758 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:00:03.311218  214758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:00:03.312300  214758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:00:03.349174  214758 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:00:03.349299  214758 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:00:03.349469  214758 config.go:182] Loaded profile config "pause-484421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:00:03.376865  214758 out.go:179] * Verifying Kubernetes components...
	I1216 05:00:03.376859  214758 out.go:179] * Enabled addons: 
	I1216 05:00:00.305302  212055 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:00:00.305462  212055 start.go:159] libmachine.API.Create for "missing-upgrade-808405" (driver="docker")
	I1216 05:00:00.305490  212055 client.go:173] LocalClient.Create starting
	I1216 05:00:00.305593  212055 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:00:00.305636  212055 main.go:143] libmachine: Decoding PEM data...
	I1216 05:00:00.305653  212055 main.go:143] libmachine: Parsing certificate...
	I1216 05:00:00.305718  212055 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:00:00.305736  212055 main.go:143] libmachine: Decoding PEM data...
	I1216 05:00:00.305748  212055 main.go:143] libmachine: Parsing certificate...
	I1216 05:00:00.306004  212055 cli_runner.go:164] Run: docker network inspect missing-upgrade-808405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:00:00.322836  212055 cli_runner.go:211] docker network inspect missing-upgrade-808405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:00:00.322902  212055 network_create.go:284] running [docker network inspect missing-upgrade-808405] to gather additional debugging logs...
	I1216 05:00:00.322924  212055 cli_runner.go:164] Run: docker network inspect missing-upgrade-808405
	W1216 05:00:00.343107  212055 cli_runner.go:211] docker network inspect missing-upgrade-808405 returned with exit code 1
	I1216 05:00:00.343182  212055 network_create.go:287] error running [docker network inspect missing-upgrade-808405]: docker network inspect missing-upgrade-808405: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-808405 not found
	I1216 05:00:00.343246  212055 network_create.go:289] output of [docker network inspect missing-upgrade-808405]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-808405 not found
	
	** /stderr **
	I1216 05:00:00.343392  212055 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:00:00.364590  212055 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:00:00.365338  212055 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:00:00.366142  212055 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:00:00.367092  212055 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f90356ecb2b1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:f4:5a:57:87:23} reservation:<nil>}
	I1216 05:00:00.367872  212055 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-db2b418b51b2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:1a:e8:f9:ed:29:d6} reservation:<nil>}
	I1216 05:00:00.369002  212055 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020a9930}
	I1216 05:00:00.369057  212055 network_create.go:124] attempt to create docker network missing-upgrade-808405 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 05:00:00.369120  212055 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-808405 missing-upgrade-808405
	I1216 05:00:00.420983  212055 network_create.go:108] docker network missing-upgrade-808405 192.168.94.0/24 created
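
The subnet scan above steps through candidate private /24s until one is unclaimed: each taken subnet (third octets 49, 58, 67, 76, 85) is skipped, and 192.168.94.0/24 wins. Below is a minimal Go sketch of that walk; the step of 9 is read off the log, and freeSubnet and the taken map are illustrative names, not minikube's actual network package.

package main

import (
	"fmt"
	"net"
)

// taken mirrors the subnets the log reports as already claimed by
// existing docker bridge networks.
var taken = map[string]bool{
	"192.168.49.0/24": true,
	"192.168.58.0/24": true,
	"192.168.67.0/24": true,
	"192.168.76.0/24": true,
	"192.168.85.0/24": true,
}

// freeSubnet walks candidate /24s, stepping the third octet by 9 as the
// log does, and returns the first one no existing network occupies.
func freeSubnet() (*net.IPNet, error) {
	for octet := 49; octet < 256; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free /24 left in 192.168.0.0/16")
}

func main() {
	subnet, err := freeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // prints 192.168.94.0/24
}
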
	I1216 05:00:00.421041  212055 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-808405" container
	I1216 05:00:00.421104  212055 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:00:00.439923  212055 cli_runner.go:164] Run: docker volume create missing-upgrade-808405 --label name.minikube.sigs.k8s.io=missing-upgrade-808405 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:00:00.456401  212055 oci.go:103] Successfully created a docker volume missing-upgrade-808405
	I1216 05:00:00.456492  212055 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-808405-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-808405 --entrypoint /usr/bin/test -v missing-upgrade-808405:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1216 05:00:00.744288  212055 oci.go:107] Successfully prepared a docker volume missing-upgrade-808405
	I1216 05:00:00.744357  212055 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 05:00:00.744374  212055 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:00:00.744447  212055 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-808405:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
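
The volume creation and preload extraction above are ordinary docker invocations shelled out through cli_runner. A self-contained Go sketch of the extraction step, reusing the exact flags from the log; extractPreload is a hypothetical helper for illustration, not a minikube API.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mounts the lz4-compressed preload tarball read-only
// into a throwaway kicbase container and untars it into the named
// docker volume, mirroring the cli_runner command in the log.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("preload extraction failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
		"missing-upgrade-808405",
		"gcr.io/k8s-minikube/kicbase:v0.0.46")
	if err != nil {
		fmt.Println(err)
	}
}
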
	I1216 05:00:03.423164  214758 addons.go:530] duration metric: took 73.876988ms for enable addons: enabled=[]
	I1216 05:00:03.423255  214758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:00:03.534994  214758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:00:03.549283  214758 node_ready.go:35] waiting up to 6m0s for node "pause-484421" to be "Ready" ...
	I1216 05:00:03.556778  214758 node_ready.go:49] node "pause-484421" is "Ready"
	I1216 05:00:03.556804  214758 node_ready.go:38] duration metric: took 7.480978ms for node "pause-484421" to be "Ready" ...
	I1216 05:00:03.556817  214758 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:00:03.556861  214758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:00:03.568869  214758 api_server.go:72] duration metric: took 219.637604ms to wait for apiserver process to appear ...
	I1216 05:00:03.568901  214758 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:00:03.568921  214758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:00:03.572858  214758 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:00:03.573799  214758 api_server.go:141] control plane version: v1.34.2
	I1216 05:00:03.573823  214758 api_server.go:131] duration metric: took 4.91514ms to wait for apiserver health ...
	I1216 05:00:03.573835  214758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:00:03.577334  214758 system_pods.go:59] 7 kube-system pods found
	I1216 05:00:03.577370  214758 system_pods.go:61] "coredns-66bc5c9577-b27l5" [bb106fc4-4ab0-4e07-bed9-6af94a348f88] Running
	I1216 05:00:03.577379  214758 system_pods.go:61] "etcd-pause-484421" [8bdf54c9-6a67-415b-ae16-a0640e2d1d8b] Running
	I1216 05:00:03.577384  214758 system_pods.go:61] "kindnet-tmm6g" [e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1] Running
	I1216 05:00:03.577390  214758 system_pods.go:61] "kube-apiserver-pause-484421" [14dc9352-7c85-4698-acdb-1f77b2fddd8b] Running
	I1216 05:00:03.577395  214758 system_pods.go:61] "kube-controller-manager-pause-484421" [ab6d7dd2-de5c-4549-835f-927c8964672b] Running
	I1216 05:00:03.577401  214758 system_pods.go:61] "kube-proxy-jlk4q" [f8ef9bea-11c1-4531-973c-debd3fcc0b5f] Running
	I1216 05:00:03.577408  214758 system_pods.go:61] "kube-scheduler-pause-484421" [f4182356-68e1-4dbb-a344-2a4fd652b8c8] Running
	I1216 05:00:03.577419  214758 system_pods.go:74] duration metric: took 3.576816ms to wait for pod list to return data ...
	I1216 05:00:03.577432  214758 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:00:03.579342  214758 default_sa.go:45] found service account: "default"
	I1216 05:00:03.579362  214758 default_sa.go:55] duration metric: took 1.92427ms for default service account to be created ...
	I1216 05:00:03.579372  214758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:00:03.581650  214758 system_pods.go:86] 7 kube-system pods found
	I1216 05:00:03.581671  214758 system_pods.go:89] "coredns-66bc5c9577-b27l5" [bb106fc4-4ab0-4e07-bed9-6af94a348f88] Running
	I1216 05:00:03.581676  214758 system_pods.go:89] "etcd-pause-484421" [8bdf54c9-6a67-415b-ae16-a0640e2d1d8b] Running
	I1216 05:00:03.581681  214758 system_pods.go:89] "kindnet-tmm6g" [e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1] Running
	I1216 05:00:03.581686  214758 system_pods.go:89] "kube-apiserver-pause-484421" [14dc9352-7c85-4698-acdb-1f77b2fddd8b] Running
	I1216 05:00:03.581692  214758 system_pods.go:89] "kube-controller-manager-pause-484421" [ab6d7dd2-de5c-4549-835f-927c8964672b] Running
	I1216 05:00:03.581696  214758 system_pods.go:89] "kube-proxy-jlk4q" [f8ef9bea-11c1-4531-973c-debd3fcc0b5f] Running
	I1216 05:00:03.581701  214758 system_pods.go:89] "kube-scheduler-pause-484421" [f4182356-68e1-4dbb-a344-2a4fd652b8c8] Running
	I1216 05:00:03.581712  214758 system_pods.go:126] duration metric: took 2.333298ms to wait for k8s-apps to be running ...
	I1216 05:00:03.581724  214758 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:00:03.581765  214758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:00:03.594264  214758 system_svc.go:56] duration metric: took 12.53497ms WaitForService to wait for kubelet
	I1216 05:00:03.594286  214758 kubeadm.go:587] duration metric: took 245.062744ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:00:03.594312  214758 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:00:03.596992  214758 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:00:03.597053  214758 node_conditions.go:123] node cpu capacity is 8
	I1216 05:00:03.597070  214758 node_conditions.go:105] duration metric: took 2.753664ms to run NodePressure ...
	I1216 05:00:03.597086  214758 start.go:242] waiting for startup goroutines ...
	I1216 05:00:03.597100  214758 start.go:247] waiting for cluster config update ...
	I1216 05:00:03.597111  214758 start.go:256] writing updated cluster config ...
	I1216 05:00:03.599537  214758 ssh_runner.go:195] Run: rm -f paused
	I1216 05:00:03.603523  214758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:00:03.604289  214758 kapi.go:59] client config for pause-484421: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.key", CAFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:00:03.606853  214758 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b27l5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.610749  214758 pod_ready.go:94] pod "coredns-66bc5c9577-b27l5" is "Ready"
	I1216 05:00:03.610768  214758 pod_ready.go:86] duration metric: took 3.897182ms for pod "coredns-66bc5c9577-b27l5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.612494  214758 pod_ready.go:83] waiting for pod "etcd-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.615806  214758 pod_ready.go:94] pod "etcd-pause-484421" is "Ready"
	I1216 05:00:03.615822  214758 pod_ready.go:86] duration metric: took 3.310556ms for pod "etcd-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.617576  214758 pod_ready.go:83] waiting for pod "kube-apiserver-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.621042  214758 pod_ready.go:94] pod "kube-apiserver-pause-484421" is "Ready"
	I1216 05:00:03.621058  214758 pod_ready.go:86] duration metric: took 3.464745ms for pod "kube-apiserver-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.622566  214758 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:04.007416  214758 pod_ready.go:94] pod "kube-controller-manager-pause-484421" is "Ready"
	I1216 05:00:04.007449  214758 pod_ready.go:86] duration metric: took 384.86328ms for pod "kube-controller-manager-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:04.207312  214758 pod_ready.go:83] waiting for pod "kube-proxy-jlk4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:04.607513  214758 pod_ready.go:94] pod "kube-proxy-jlk4q" is "Ready"
	I1216 05:00:04.607543  214758 pod_ready.go:86] duration metric: took 400.205343ms for pod "kube-proxy-jlk4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:04.807898  214758 pod_ready.go:83] waiting for pod "kube-scheduler-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:05.207446  214758 pod_ready.go:94] pod "kube-scheduler-pause-484421" is "Ready"
	I1216 05:00:05.207476  214758 pod_ready.go:86] duration metric: took 399.552255ms for pod "kube-scheduler-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:05.207488  214758 pod_ready.go:40] duration metric: took 1.603940339s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:00:05.259310  214758 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:00:05.261737  214758 out.go:179] * Done! kubectl is now configured to use "pause-484421" cluster and "default" namespace by default
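
The node_ready and pod_ready waits in the run above boil down to one client-go pattern: fetch the object, scan status.conditions for Ready=True, retry until a deadline. A compact sketch of both checks under that assumption; nodeReady and podsReady are illustrative helpers, and the kubeconfig path is whatever clientcmd resolves for the current user.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True,
// the check behind the node_ready lines above.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

// podsReady reports whether every pod matching selector in namespace
// has condition Ready=True, the check behind the pod_ready lines.
func podsReady(cs *kubernetes.Clientset, namespace, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, pod := range pods.Items {
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ok, _ := nodeReady(cs, "pause-484421")
	fmt.Println("node Ready:", ok)
	// The same label selectors the log waits on, one per component.
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd",
		"component=kube-apiserver", "component=kube-controller-manager",
		"k8s-app=kube-proxy", "component=kube-scheduler"} {
		ok, err := podsReady(cs, "kube-system", sel)
		fmt.Println(sel, "ready:", ok, err)
	}
}
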
	I1216 05:00:03.438508  213062 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 05:00:03.438591  213062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:00:03.609364  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 05:00:03.609402  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:04.411666  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:37718->192.168.103.2:8443: read: connection reset by peer
	I1216 05:00:04.411725  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:04.412092  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:04.605472  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:04.605879  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:05.105172  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:05.105621  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:05.605098  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:05.605536  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:06.105140  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:06.105566  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:06.605094  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:06.605480  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:07.105114  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:07.105464  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
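
The api_server.go lines interleaved above (pids 213062 and 204381 belong to other profiles starting in parallel) are the healthz poll: GET https://<node-ip>:8443/healthz roughly every 500ms, treating dial errors and connection resets as "not up yet" until the apiserver answers 200 or the deadline expires. A stripped-down Go sketch of such a loop; the 500ms interval is read off the timestamps, and InsecureSkipVerify is a shortcut for the sketch where minikube actually trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries GET /healthz every 500ms until it returns 200 OK
// or the deadline passes, tolerating dial errors the same way the loop
// in the log tolerates "connection refused" while the apiserver restarts.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, deadline)
}

func main() {
	if err := pollHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
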
	
	
	==> CRI-O <==
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052116176Z" level=info msg="RDT not available in the host system"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052128778Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052900505Z" level=info msg="Conmon does support the --sync option"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052915671Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052931557Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.05361757Z" level=info msg="Conmon does support the --sync option"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.053638432Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.057635631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.057659142Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.058424242Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.058922496Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.058984623Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.144664802Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-b27l5 Namespace:kube-system ID:c6bf05bd0501c624db05e187ea29b5ed88478aeede5b5098b5c73b8cfa5523e2 UID:bb106fc4-4ab0-4e07-bed9-6af94a348f88 NetNS:/var/run/netns/00bab8b3-6cf1-472f-8a10-759235ef7fc9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000146498}] Aliases:map[]}"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.144902282Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-b27l5 for CNI network kindnet (type=ptp)"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145480707Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.14551377Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145580378Z" level=info msg="Create NRI interface"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145694618Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145707778Z" level=info msg="runtime interface created"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145718442Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.14572344Z" level=info msg="runtime interface starting up..."
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.14572857Z" level=info msg="starting plugins..."
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145739436Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.146030521Z" level=info msg="No systemd watchdog enabled"
	Dec 16 05:00:02 pause-484421 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d9defad5f5e81       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago       Running             coredns                   0                   c6bf05bd0501c       coredns-66bc5c9577-b27l5               kube-system
	968f87f2624b3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   54 seconds ago       Running             kindnet-cni               0                   e2fb888bcbcce       kindnet-tmm6g                          kube-system
	d91dd18a4d06c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   54 seconds ago       Running             kube-proxy                0                   9d5e4fbfce059       kube-proxy-jlk4q                       kube-system
	ec7e20b14ddeb       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   cc75c981cb5ca       etcd-pause-484421                      kube-system
	cb8a304fb27d8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   6ec9282222931       kube-controller-manager-pause-484421   kube-system
	58e9c0db8756d       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   c2248d20354a1       kube-scheduler-pause-484421            kube-system
	458bca56d150f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   5e0398d12ead1       kube-apiserver-pause-484421            kube-system
	
	
	==> coredns [d9defad5f5e81a2f4791a6552668e65d334fce9ddba96c6a67b8671f5d537f5a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43188 - 25060 "HINFO IN 454759291112319387.1071349700529759130. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.020286519s
	
	
	==> describe nodes <==
	Name:               pause-484421
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-484421
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=pause-484421
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T04_59_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 04:59:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-484421
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 04:59:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 04:59:58 +0000   Tue, 16 Dec 2025 04:59:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 04:59:58 +0000   Tue, 16 Dec 2025 04:59:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 04:59:58 +0000   Tue, 16 Dec 2025 04:59:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 04:59:58 +0000   Tue, 16 Dec 2025 04:59:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-484421
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                e2c99160-5189-445d-ad90-cc9a32c3ca20
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-b27l5                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-pause-484421                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-tmm6g                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-pause-484421             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-pause-484421    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-jlk4q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-484421             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 66s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node pause-484421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node pause-484421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x8 over 66s)  kubelet          Node pause-484421 status is now: NodeHasSufficientPID
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s                kubelet          Node pause-484421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s                kubelet          Node pause-484421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s                kubelet          Node pause-484421 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node pause-484421 event: Registered Node pause-484421 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-484421 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [ec7e20b14ddebcfe8a170e905b9398b74f569c252da3551cef9f2dad88d54764] <==
	{"level":"warn","ts":"2025-12-16T04:59:04.192059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.202173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.211534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.222050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.230758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.245285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.279465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.285532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.363113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:13.375916Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.441792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-12-16T04:59:13.376061Z","caller":"traceutil/trace.go:172","msg":"trace[99070168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:360; }","duration":"131.623625ms","start":"2025-12-16T04:59:13.244422Z","end":"2025-12-16T04:59:13.376046Z","steps":["trace[99070168] 'agreement among raft nodes before linearized reading'  (duration: 49.281422ms)","trace[99070168] 'range keys from in-memory index tree'  (duration: 82.073565ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:59:13.375967Z","caller":"traceutil/trace.go:172","msg":"trace[1910788047] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"151.59471ms","start":"2025-12-16T04:59:13.224353Z","end":"2025-12-16T04:59:13.375948Z","steps":["trace[1910788047] 'process raft request'  (duration: 151.551386ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:13.375981Z","caller":"traceutil/trace.go:172","msg":"trace[1475198763] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"151.690557ms","start":"2025-12-16T04:59:13.224271Z","end":"2025-12-16T04:59:13.375962Z","steps":["trace[1475198763] 'process raft request'  (duration: 69.488272ms)","trace[1475198763] 'compare'  (duration: 82.029882ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:59:16.945855Z","caller":"traceutil/trace.go:172","msg":"trace[771357245] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"135.148202ms","start":"2025-12-16T04:59:16.810688Z","end":"2025-12-16T04:59:16.945836Z","steps":["trace[771357245] 'process raft request'  (duration: 134.362102ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:55.054711Z","caller":"traceutil/trace.go:172","msg":"trace[542602384] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"169.322853ms","start":"2025-12-16T04:59:54.885366Z","end":"2025-12-16T04:59:55.054689Z","steps":["trace[542602384] 'process raft request'  (duration: 169.200287ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:55.271363Z","caller":"traceutil/trace.go:172","msg":"trace[1152143919] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"208.880611ms","start":"2025-12-16T04:59:55.062460Z","end":"2025-12-16T04:59:55.271340Z","steps":["trace[1152143919] 'process raft request'  (duration: 142.240207ms)","trace[1152143919] 'compare'  (duration: 66.535419ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T04:59:55.545342Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.162112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-484421\" limit:1 ","response":"range_response_count:1 size:5412"}
	{"level":"info","ts":"2025-12-16T04:59:55.545408Z","caller":"traceutil/trace.go:172","msg":"trace[542468568] range","detail":"{range_begin:/registry/minions/pause-484421; range_end:; response_count:1; response_revision:429; }","duration":"143.233845ms","start":"2025-12-16T04:59:55.402157Z","end":"2025-12-16T04:59:55.545391Z","steps":["trace[542468568] 'agreement among raft nodes before linearized reading'  (duration: 80.696175ms)","trace[542468568] 'range keys from in-memory index tree'  (duration: 62.423681ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T04:59:55.545404Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.50687ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:59:55.545425Z","caller":"traceutil/trace.go:172","msg":"trace[564624687] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"270.081591ms","start":"2025-12-16T04:59:55.275330Z","end":"2025-12-16T04:59:55.545412Z","steps":["trace[564624687] 'process raft request'  (duration: 207.627992ms)","trace[564624687] 'compare'  (duration: 62.373248ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:59:55.545452Z","caller":"traceutil/trace.go:172","msg":"trace[1530607488] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:429; }","duration":"113.564521ms","start":"2025-12-16T04:59:55.431879Z","end":"2025-12-16T04:59:55.545443Z","steps":["trace[1530607488] 'range keys from in-memory index tree'  (duration: 113.492566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:56.730445Z","caller":"traceutil/trace.go:172","msg":"trace[1448634030] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:455; }","duration":"179.751872ms","start":"2025-12-16T04:59:56.550653Z","end":"2025-12-16T04:59:56.730405Z","steps":["trace[1448634030] 'read index received'  (duration: 179.744243ms)","trace[1448634030] 'applied index is now lower than readState.Index'  (duration: 6.658µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T04:59:56.730542Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.872036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:59:56.730575Z","caller":"traceutil/trace.go:172","msg":"trace[364574620] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"209.611303ms","start":"2025-12-16T04:59:56.520953Z","end":"2025-12-16T04:59:56.730565Z","steps":["trace[364574620] 'process raft request'  (duration: 209.50277ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:56.730590Z","caller":"traceutil/trace.go:172","msg":"trace[1530804081] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:436; }","duration":"179.93796ms","start":"2025-12-16T04:59:56.550643Z","end":"2025-12-16T04:59:56.730581Z","steps":["trace[1530804081] 'agreement among raft nodes before linearized reading'  (duration: 179.842664ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:00:08 up 42 min,  0 user,  load average: 4.84, 2.50, 1.62
	Linux pause-484421 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [968f87f2624b3637b57b139c9e9e25f16f3f234a64e0731dcb862238901cb6ba] <==
	I1216 04:59:14.221966       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 04:59:14.222242       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 04:59:14.222430       1 main.go:148] setting mtu 1500 for CNI 
	I1216 04:59:14.222451       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 04:59:14.222479       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T04:59:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 04:59:14.422460       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 04:59:14.422500       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 04:59:14.422513       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 04:59:14.422793       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1216 04:59:44.423287       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 04:59:44.423287       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 04:59:44.423290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1216 04:59:44.423290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1216 04:59:45.923360       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 04:59:45.923405       1 metrics.go:72] Registering metrics
	I1216 04:59:45.923483       1 controller.go:711] "Syncing nftables rules"
	I1216 04:59:54.429541       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 04:59:54.429614       1 main.go:301] handling current node
	I1216 05:00:04.427102       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:00:04.427151       1 main.go:301] handling current node
	
	
	==> kube-apiserver [458bca56d150f434d9429af83aea10d61e55e77847c80da62658142b3d777fcc] <==
	I1216 04:59:05.239124       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 04:59:05.239231       1 policy_source.go:240] refreshing policies
	I1216 04:59:05.251844       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 04:59:05.262417       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 04:59:05.268522       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 04:59:05.274390       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1216 04:59:05.328580       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 04:59:05.328671       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 04:59:06.031142       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 04:59:06.036412       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 04:59:06.036435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 04:59:06.562644       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 04:59:06.605132       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 04:59:06.739106       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 04:59:06.745326       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1216 04:59:06.746410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 04:59:06.750626       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 04:59:07.350346       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 04:59:07.517845       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 04:59:07.529924       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 04:59:07.538453       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 04:59:13.048867       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1216 04:59:13.151891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 04:59:13.157777       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 04:59:13.381936       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cb8a304fb27d828a1b7836282095b903d391000ae25636c087d8fd606b472c2c] <==
	I1216 04:59:12.347056       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 04:59:12.347066       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 04:59:12.347107       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 04:59:12.347151       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 04:59:12.347197       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 04:59:12.348877       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:59:12.348912       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 04:59:12.349081       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 04:59:12.349140       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-484421"
	I1216 04:59:12.349176       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1216 04:59:12.349397       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 04:59:12.351373       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 04:59:12.352573       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 04:59:12.352630       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 04:59:12.352673       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 04:59:12.352685       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 04:59:12.352693       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 04:59:12.354871       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 04:59:12.355112       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:59:12.359302       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 04:59:12.360295       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-484421" podCIDRs=["10.244.0.0/24"]
	I1216 04:59:12.363609       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 04:59:12.366176       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 04:59:12.373402       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 04:59:57.353316       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d91dd18a4d06c4328e2d20df468a15ef3f5bbe97032ee881f9ad98aad838254a] <==
	I1216 04:59:14.083363       1 server_linux.go:53] "Using iptables proxy"
	I1216 04:59:14.158816       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 04:59:14.259861       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 04:59:14.259908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 04:59:14.260054       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 04:59:14.279615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 04:59:14.279689       1 server_linux.go:132] "Using iptables Proxier"
	I1216 04:59:14.285652       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 04:59:14.286455       1 server.go:527] "Version info" version="v1.34.2"
	I1216 04:59:14.286483       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 04:59:14.288565       1 config.go:200] "Starting service config controller"
	I1216 04:59:14.288590       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 04:59:14.288603       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 04:59:14.288620       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 04:59:14.288624       1 config.go:106] "Starting endpoint slice config controller"
	I1216 04:59:14.288630       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 04:59:14.288673       1 config.go:309] "Starting node config controller"
	I1216 04:59:14.288679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 04:59:14.288685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 04:59:14.389647       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 04:59:14.389704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 04:59:14.389773       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [58e9c0db8756debadb6ff202a2543533c637024ea73b8961b533b6fadd6efd18] <==
	E1216 04:59:05.231734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:59:05.231881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 04:59:05.231928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:59:05.231975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 04:59:05.234142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:59:05.234293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:59:05.234355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 04:59:05.234399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 04:59:05.234443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:59:05.234555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 04:59:05.234626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:59:05.234693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 04:59:05.234754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 04:59:05.234839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:59:05.234898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 04:59:05.234962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:59:05.236539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 04:59:05.244243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 04:59:05.256079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:59:06.128527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 04:59:06.219548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:59:06.249142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:59:06.251044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:59:06.260446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1216 04:59:06.720631       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 04:59:13 pause-484421 kubelet[1320]: E1216 04:59:13.222812    1320 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 16 04:59:13 pause-484421 kubelet[1320]: E1216 04:59:13.222839    1320 projected.go:196] Error preparing data for projected volume kube-api-access-vt8zp for pod kube-system/kindnet-tmm6g: configmap "kube-root-ca.crt" not found
	Dec 16 04:59:13 pause-484421 kubelet[1320]: E1216 04:59:13.222911    1320 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1-kube-api-access-vt8zp podName:e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1 nodeName:}" failed. No retries permitted until 2025-12-16 04:59:13.722891489 +0000 UTC m=+6.441346801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vt8zp" (UniqueName: "kubernetes.io/projected/e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1-kube-api-access-vt8zp") pod "kindnet-tmm6g" (UID: "e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1") : configmap "kube-root-ca.crt" not found
	Dec 16 04:59:14 pause-484421 kubelet[1320]: I1216 04:59:14.455370    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tmm6g" podStartSLOduration=1.455349206 podStartE2EDuration="1.455349206s" podCreationTimestamp="2025-12-16 04:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:59:14.441507412 +0000 UTC m=+7.159962732" watchObservedRunningTime="2025-12-16 04:59:14.455349206 +0000 UTC m=+7.173804527"
	Dec 16 04:59:14 pause-484421 kubelet[1320]: I1216 04:59:14.465298    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jlk4q" podStartSLOduration=1.465275795 podStartE2EDuration="1.465275795s" podCreationTimestamp="2025-12-16 04:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:59:14.465268334 +0000 UTC m=+7.183723654" watchObservedRunningTime="2025-12-16 04:59:14.465275795 +0000 UTC m=+7.183731116"
	Dec 16 04:59:54 pause-484421 kubelet[1320]: I1216 04:59:54.880069    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 16 04:59:55 pause-484421 kubelet[1320]: I1216 04:59:55.703326    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb106fc4-4ab0-4e07-bed9-6af94a348f88-config-volume\") pod \"coredns-66bc5c9577-b27l5\" (UID: \"bb106fc4-4ab0-4e07-bed9-6af94a348f88\") " pod="kube-system/coredns-66bc5c9577-b27l5"
	Dec 16 04:59:55 pause-484421 kubelet[1320]: I1216 04:59:55.703371    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqz8\" (UniqueName: \"kubernetes.io/projected/bb106fc4-4ab0-4e07-bed9-6af94a348f88-kube-api-access-clqz8\") pod \"coredns-66bc5c9577-b27l5\" (UID: \"bb106fc4-4ab0-4e07-bed9-6af94a348f88\") " pod="kube-system/coredns-66bc5c9577-b27l5"
	Dec 16 04:59:56 pause-484421 kubelet[1320]: I1216 04:59:56.732353    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b27l5" podStartSLOduration=43.73233448 podStartE2EDuration="43.73233448s" podCreationTimestamp="2025-12-16 04:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:59:56.732166223 +0000 UTC m=+49.450621543" watchObservedRunningTime="2025-12-16 04:59:56.73233448 +0000 UTC m=+49.450789799"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: W1216 05:00:01.393213    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.393319    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.393410    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.393428    1320 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.393443    1320 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: W1216 05:00:01.493607    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.524826    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.524878    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.524892    1320 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: W1216 05:00:01.655080    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: W1216 05:00:01.922007    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 16 05:00:02 pause-484421 kubelet[1320]: E1216 05:00:02.412180    1320 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Dec 16 05:00:05 pause-484421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:00:05 pause-484421 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:00:05 pause-484421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:05 pause-484421 systemd[1]: kubelet.service: Consumed 2.204s CPU time.
	

-- /stdout --
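The kubelet tail above ends with repeated failures to dial /var/run/crio/crio.sock immediately after the pause was issued, pointing at the container runtime rather than the API server as the component that went away. A minimal manual follow-up, assuming the pause-484421 profile is still up and that CRI-O runs as a systemd unit inside the node (both suggested by the logs above, neither verified here):

	# hypothetical follow-up, not part of the automated run:
	# check the CRI-O unit and its socket inside the node
	out/minikube-linux-amd64 ssh -p pause-484421 -- sudo systemctl is-active crio
	out/minikube-linux-amd64 ssh -p pause-484421 -- ls -l /var/run/crio/crio.sock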
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-484421 -n pause-484421
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-484421 -n pause-484421: exit status 2 (343.640996ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
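The {{.APIServer}} Go template above prints a single field of minikube's status struct, and the non-zero exit code is how degraded component state is reported even when that field reads Running, hence the "(may be ok)" note. A combined query over the usual status fields (a sketch; Host, Kubelet, APIServer and Kubeconfig are the standard field names and are assumed unchanged in this build):

	out/minikube-linux-amd64 status -p pause-484421 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'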
helpers_test.go:270: (dbg) Run:  kubectl --context pause-484421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-484421
helpers_test.go:244: (dbg) docker inspect pause-484421:

-- stdout --
	[
	    {
	        "Id": "84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e",
	        "Created": "2025-12-16T04:58:45.466722374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195048,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:58:45.910170606Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e/hosts",
	        "LogPath": "/var/lib/docker/containers/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e/84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e-json.log",
	        "Name": "/pause-484421",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-484421:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-484421",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "84295152ef7986fcd4446fb138b9a9afb1884454b1ef42c39592e598c3232b6e",
	                "LowerDir": "/var/lib/docker/overlay2/eee08329c71ed8656934d7f878d205904438c61474f4f4033f200db192505af5-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee08329c71ed8656934d7f878d205904438c61474f4f4033f200db192505af5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee08329c71ed8656934d7f878d205904438c61474f4f4033f200db192505af5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee08329c71ed8656934d7f878d205904438c61474f4f4033f200db192505af5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-484421",
	                "Source": "/var/lib/docker/volumes/pause-484421/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-484421",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-484421",
	                "name.minikube.sigs.k8s.io": "pause-484421",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fcdeaa3cb627a3ae98438e58cf5d3896559b464d98dff5dc2a58b7e7dadd39cb",
	            "SandboxKey": "/var/run/docker/netns/fcdeaa3cb627",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-484421": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db2b418b51b2872ed7829d45469e7e2e3e75d46725a64a617e02cd49a4a698b0",
	                    "EndpointID": "efad74be823058e6c8cceabf3078d80a2f63e5c53ec08920f2f5a6a39d12484b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "2e:e1:85:12:b8:e1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-484421",
	                        "84295152ef79"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
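Per the inspect dump, the node container itself is "Running" with "Paused": false, so the docker-level container was never frozen; the pause targets the processes inside it. The same two fields can be pulled directly with an inspect template, the mechanism minikube itself uses later in these logs:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' pause-484421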
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-484421 -n pause-484421
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-484421 -n pause-484421: exit status 2 (329.716321ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-484421 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --cancel-scheduled                                                                                                     │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │ 16 Dec 25 04:57 UTC │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │                     │
	│ stop    │ -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:57 UTC │ 16 Dec 25 04:57 UTC │
	│ delete  │ -p scheduled-stop-694840                                                                                                                        │ scheduled-stop-694840       │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:58 UTC │
	│ start   │ -p insufficient-storage-446584 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                │ insufficient-storage-446584 │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │                     │
	│ delete  │ -p insufficient-storage-446584                                                                                                                  │ insufficient-storage-446584 │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:58 UTC │
	│ start   │ -p pause-484421 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-484421                │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p force-systemd-env-515540 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                      │ force-systemd-env-515540    │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p offline-crio-464497 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                               │ offline-crio-464497         │ jenkins │ v1.37.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p stopped-upgrade-572334 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-572334      │ jenkins │ v1.35.0 │ 16 Dec 25 04:58 UTC │ 16 Dec 25 04:59 UTC │
	│ delete  │ -p force-systemd-env-515540                                                                                                                     │ force-systemd-env-515540    │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ stop    │ stopped-upgrade-572334 stop                                                                                                                     │ stopped-upgrade-572334      │ jenkins │ v1.35.0 │ 16 Dec 25 04:59 UTC │                     │
	│ start   │ -p stopped-upgrade-572334 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-572334      │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │                     │
	│ start   │ -p missing-upgrade-808405 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-808405      │ jenkins │ v1.35.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ delete  │ -p offline-crio-464497                                                                                                                          │ offline-crio-464497         │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-132399   │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p missing-upgrade-808405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-808405      │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-132399                                                                                                                    │ kubernetes-upgrade-132399   │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 04:59 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-132399   │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │                     │
	│ start   │ -p pause-484421 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-484421                │ jenkins │ v1.37.0 │ 16 Dec 25 04:59 UTC │ 16 Dec 25 05:00 UTC │
	│ pause   │ -p pause-484421 --alsologtostderr -v=5                                                                                                          │ pause-484421                │ jenkins │ v1.37.0 │ 16 Dec 25 05:00 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:59:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:59:58.534657  214758 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:59:58.534873  214758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:59:58.534881  214758 out.go:374] Setting ErrFile to fd 2...
	I1216 04:59:58.534885  214758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:59:58.535078  214758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:59:58.535502  214758 out.go:368] Setting JSON to false
	I1216 04:59:58.536595  214758 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2549,"bootTime":1765858649,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:59:58.536653  214758 start.go:143] virtualization: kvm guest
	I1216 04:59:58.538524  214758 out.go:179] * [pause-484421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:59:58.539949  214758 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:59:58.539938  214758 notify.go:221] Checking for updates...
	I1216 04:59:58.541355  214758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:59:58.542638  214758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:59:58.543958  214758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:59:58.545099  214758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:59:58.546334  214758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:59:58.547965  214758 config.go:182] Loaded profile config "pause-484421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:59:58.548508  214758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:59:58.574398  214758 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:59:58.574478  214758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:59:58.628836  214758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 04:59:58.619167051 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:59:58.628939  214758 docker.go:319] overlay module found
	I1216 04:59:58.630801  214758 out.go:179] * Using the docker driver based on existing profile
	I1216 04:59:58.631966  214758 start.go:309] selected driver: docker
	I1216 04:59:58.631980  214758 start.go:927] validating driver "docker" against &{Name:pause-484421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:59:58.632110  214758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:59:58.632194  214758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:59:58.689693  214758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 04:59:58.6808779 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:59:58.690407  214758 cni.go:84] Creating CNI manager for ""
	I1216 04:59:58.690474  214758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:59:58.690523  214758 start.go:353] cluster config:
	{Name:pause-484421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:59:58.692267  214758 out.go:179] * Starting "pause-484421" primary control-plane node in "pause-484421" cluster
	I1216 04:59:58.693424  214758 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:59:58.694728  214758 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:59:58.695921  214758 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:59:58.696003  214758 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:59:58.696035  214758 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 04:59:58.696048  214758 cache.go:65] Caching tarball of preloaded images
	I1216 04:59:58.696157  214758 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 04:59:58.696171  214758 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 04:59:58.696334  214758 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/config.json ...
	I1216 04:59:58.716081  214758 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 04:59:58.716100  214758 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 04:59:58.716120  214758 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:59:58.716152  214758 start.go:360] acquireMachinesLock for pause-484421: {Name:mk6ce8f3dbf0021b08c180db8063ac4fd60e84fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:59:58.716229  214758 start.go:364] duration metric: took 58.562µs to acquireMachinesLock for "pause-484421"
	I1216 04:59:58.716247  214758 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:59:58.716253  214758 fix.go:54] fixHost starting: 
	I1216 04:59:58.716520  214758 cli_runner.go:164] Run: docker container inspect pause-484421 --format={{.State.Status}}
	I1216 04:59:58.734060  214758 fix.go:112] recreateIfNeeded on pause-484421: state=Running err=<nil>
	W1216 04:59:58.734093  214758 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:59:59.094207  212055 cli_runner.go:164] Run: docker container inspect missing-upgrade-808405 --format={{.State.Status}}
	W1216 04:59:59.111491  212055 cli_runner.go:211] docker container inspect missing-upgrade-808405 --format={{.State.Status}} returned with exit code 1
	I1216 04:59:59.111575  212055 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-808405": docker container inspect missing-upgrade-808405 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-808405
	I1216 04:59:59.111594  212055 oci.go:673] temporary error: container missing-upgrade-808405 status is  but expect it to be exited
	I1216 04:59:59.111641  212055 oci.go:88] couldn't shut down missing-upgrade-808405 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-808405": docker container inspect missing-upgrade-808405 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-808405
	 
	I1216 04:59:59.111689  212055 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-808405
	I1216 04:59:59.129259  212055 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-808405
	W1216 04:59:59.145906  212055 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-808405 returned with exit code 1
	I1216 04:59:59.146013  212055 cli_runner.go:164] Run: docker network inspect missing-upgrade-808405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:59:59.163125  212055 cli_runner.go:164] Run: docker network rm missing-upgrade-808405
	I1216 04:59:59.302383  212055 fix.go:124] Sleeping 1 second for extra luck!
	I1216 05:00:00.303155  212055 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:59:57.159244  213062 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:59:57.327210  213062 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:59:57.371454  213062 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:59:57.421939  213062 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:59:57.422014  213062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:59:57.922157  213062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:59:57.936235  213062 api_server.go:72] duration metric: took 514.303047ms to wait for apiserver process to appear ...
	I1216 04:59:57.936317  213062 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:59:57.936341  213062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 04:59:57.936725  213062 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 04:59:58.437159  213062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 04:59:58.735847  214758 out.go:252] * Updating the running docker "pause-484421" container ...
	I1216 04:59:58.735890  214758 machine.go:94] provisionDockerMachine start ...
	I1216 04:59:58.735965  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:58.752985  214758 main.go:143] libmachine: Using SSH client type: native
	I1216 04:59:58.753267  214758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1216 04:59:58.753288  214758 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:59:58.876799  214758 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-484421
	
	I1216 04:59:58.876823  214758 ubuntu.go:182] provisioning hostname "pause-484421"
	I1216 04:59:58.876869  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:58.895704  214758 main.go:143] libmachine: Using SSH client type: native
	I1216 04:59:58.896014  214758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1216 04:59:58.896047  214758 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-484421 && echo "pause-484421" | sudo tee /etc/hostname
	I1216 04:59:59.028730  214758 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-484421
	
	I1216 04:59:59.028801  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.045814  214758 main.go:143] libmachine: Using SSH client type: native
	I1216 04:59:59.046097  214758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1216 04:59:59.046121  214758 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-484421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-484421/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-484421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:59:59.171430  214758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
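	The hosts-file fixup in the script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1, so after provisioning /etc/hosts carries a line of the form:
	
		127.0.1.1 pause-484421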
	I1216 04:59:59.171462  214758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 04:59:59.171497  214758 ubuntu.go:190] setting up certificates
	I1216 04:59:59.171511  214758 provision.go:84] configureAuth start
	I1216 04:59:59.171571  214758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-484421
	I1216 04:59:59.190866  214758 provision.go:143] copyHostCerts
	I1216 04:59:59.190933  214758 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 04:59:59.190951  214758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 04:59:59.191009  214758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 04:59:59.191132  214758 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 04:59:59.191143  214758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 04:59:59.191173  214758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 04:59:59.191265  214758 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 04:59:59.191273  214758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 04:59:59.191297  214758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 04:59:59.191359  214758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.pause-484421 san=[127.0.0.1 192.168.85.2 localhost minikube pause-484421]
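	For illustration, a simplified sketch of this server-cert generation step using Go's crypto/x509, with the organization and SANs taken from the log line above. The real flow signs with the CA key pair listed there; to stay self-contained this sketch issues a self-signed certificate instead:
	
		package main
	
		import (
			"crypto/ecdsa"
			"crypto/elliptic"
			"crypto/rand"
			"crypto/x509"
			"crypto/x509/pkix"
			"encoding/pem"
			"math/big"
			"net"
			"os"
			"time"
		)
	
		func main() {
			key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
			if err != nil {
				panic(err)
			}
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(1),
				Subject:      pkix.Name{Organization: []string{"jenkins.pause-484421"}},
				NotBefore:    time.Now(),
				NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config below
				KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
				// SANs from san=[...] above, split into IP and DNS entries.
				IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
				DNSNames:    []string{"localhost", "minikube", "pause-484421"},
			}
			der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
			if err != nil {
				panic(err)
			}
			pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		}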
	I1216 04:59:59.285172  214758 provision.go:177] copyRemoteCerts
	I1216 04:59:59.285235  214758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:59:59.285283  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.302603  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 04:59:59.394488  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:59:59.411517  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 04:59:59.429256  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 04:59:59.446112  214758 provision.go:87] duration metric: took 274.583519ms to configureAuth
	I1216 04:59:59.446135  214758 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:59:59.446375  214758 config.go:182] Loaded profile config "pause-484421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:59:59.446497  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.463867  214758 main.go:143] libmachine: Using SSH client type: native
	I1216 04:59:59.464107  214758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1216 04:59:59.464124  214758 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:59:59.775223  214758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:59:59.775251  214758 machine.go:97] duration metric: took 1.039348463s to provisionDockerMachine
	I1216 04:59:59.775266  214758 start.go:293] postStartSetup for "pause-484421" (driver="docker")
	I1216 04:59:59.775277  214758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:59:59.775346  214758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:59:59.775381  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.792773  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 04:59:59.885498  214758 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:59:59.889135  214758 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:59:59.889165  214758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:59:59.889175  214758 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 04:59:59.889224  214758 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 04:59:59.889323  214758 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 04:59:59.889443  214758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:59:59.896752  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 04:59:59.913354  214758 start.go:296] duration metric: took 138.074334ms for postStartSetup
	I1216 04:59:59.913413  214758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:59:59.913448  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 04:59:59.931107  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 05:00:00.019568  214758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:00:00.024476  214758 fix.go:56] duration metric: took 1.308216106s for fixHost
	I1216 05:00:00.024506  214758 start.go:83] releasing machines lock for "pause-484421", held for 1.30826709s
	I1216 05:00:00.024569  214758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-484421
	I1216 05:00:00.042385  214758 ssh_runner.go:195] Run: cat /version.json
	I1216 05:00:00.042429  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 05:00:00.042481  214758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:00:00.042548  214758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484421
	I1216 05:00:00.059827  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 05:00:00.060126  214758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/pause-484421/id_rsa Username:docker}
	I1216 05:00:00.199335  214758 ssh_runner.go:195] Run: systemctl --version
	I1216 05:00:00.206614  214758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:00:00.240521  214758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:00:00.245629  214758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:00:00.245687  214758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:00:00.253668  214758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:00:00.253696  214758 start.go:496] detecting cgroup driver to use...
	I1216 05:00:00.253726  214758 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:00:00.253778  214758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:00:00.267426  214758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:00:00.279074  214758 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:00:00.279133  214758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:00:00.292874  214758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:00:00.304929  214758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:00:00.434165  214758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:00:00.563109  214758 docker.go:234] disabling docker service ...
	I1216 05:00:00.563174  214758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:00:00.578112  214758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:00:00.592535  214758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:00:00.722460  214758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:00:00.857471  214758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:00:00.870548  214758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:00:00.887248  214758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:00:00.887309  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.904489  214758 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:00:00.904558  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.914005  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.923639  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.933411  214758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:00:00.941672  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.951191  214758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.960109  214758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:00.969640  214758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:00:00.977545  214758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:00:00.985110  214758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:00:01.093762  214758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:00:02.150039  214758 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.05621738s)
	I1216 05:00:02.150075  214758 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:00:02.150130  214758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:00:02.154274  214758 start.go:564] Will wait 60s for crictl version
	I1216 05:00:02.154334  214758 ssh_runner.go:195] Run: which crictl
	I1216 05:00:02.157833  214758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:00:02.182669  214758 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:00:02.182771  214758 ssh_runner.go:195] Run: crio --version
	I1216 05:00:02.210979  214758 ssh_runner.go:195] Run: crio --version
	I1216 05:00:02.244494  214758 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 04:59:58.608446  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 04:59:58.608484  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:02.246071  214758 cli_runner.go:164] Run: docker network inspect pause-484421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:00:02.262859  214758 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:00:02.267076  214758 kubeadm.go:884] updating cluster {Name:pause-484421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:00:02.267213  214758 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:00:02.267260  214758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:00:02.297131  214758 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:00:02.297152  214758 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:00:02.297208  214758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:00:02.323135  214758 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:00:02.323161  214758 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:00:02.323170  214758 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1216 05:00:02.323281  214758 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-484421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:00:02.323365  214758 ssh_runner.go:195] Run: crio config
	I1216 05:00:02.369739  214758 cni.go:84] Creating CNI manager for ""
	I1216 05:00:02.369766  214758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:00:02.369786  214758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:00:02.369817  214758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-484421 NodeName:pause-484421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:00:02.370013  214758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-484421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
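	The generated file stacks four kubeadm API documents separated by ---: an InitConfiguration and a ClusterConfiguration (both kubeadm.k8s.io/v1beta4), plus a KubeletConfiguration and a KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new below and later diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the running control plane needs reconfiguration. As a hypothetical by-hand check (not something this run performs), kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run would exercise the same file without touching the node.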
	
	I1216 05:00:02.370111  214758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:00:02.378462  214758 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:00:02.378533  214758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:00:02.386530  214758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 05:00:02.399684  214758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:00:02.413623  214758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1216 05:00:02.428170  214758 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:00:02.432555  214758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:00:02.578949  214758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:00:02.593201  214758 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421 for IP: 192.168.85.2
	I1216 05:00:02.593228  214758 certs.go:195] generating shared ca certs ...
	I1216 05:00:02.593249  214758 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:00:02.593436  214758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:00:02.593497  214758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:00:02.593511  214758 certs.go:257] generating profile certs ...
	I1216 05:00:02.593649  214758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.key
	I1216 05:00:02.593743  214758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/apiserver.key.29cd6c06
	I1216 05:00:02.593804  214758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/proxy-client.key
	I1216 05:00:02.593970  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:00:02.594029  214758 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:00:02.594044  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:00:02.594084  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:00:02.594124  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:00:02.594162  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:00:02.594236  214758 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:00:02.594935  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:00:02.614371  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:00:02.632629  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:00:02.650795  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:00:02.668408  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:00:02.686090  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:00:02.703270  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:00:02.720786  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:00:02.737906  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:00:02.755428  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:00:02.773930  214758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:00:02.791706  214758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:00:02.804954  214758 ssh_runner.go:195] Run: openssl version
	I1216 05:00:02.811200  214758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:00:02.818518  214758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:00:02.826110  214758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:00:02.829902  214758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:00:02.829965  214758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:00:02.865796  214758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:00:02.873696  214758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:00:02.881541  214758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:00:02.889515  214758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:00:02.893246  214758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:00:02.893290  214758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:00:02.928055  214758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:00:02.936281  214758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:00:02.943858  214758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:00:02.952283  214758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:00:02.956089  214758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:00:02.956149  214758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:00:03.004414  214758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:00:03.012665  214758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:00:03.016655  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:00:03.053746  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:00:03.090083  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:00:03.127767  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:00:03.171873  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:00:03.206867  214758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
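	The -checkend 86400 probes above make openssl exit 0 only if the certificate in question will still be valid 86400 seconds (24 hours) from now, and non-zero otherwise; that exit code is how minikube decides whether the existing control-plane certificates can be reused or must be regenerated before starting the cluster.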
	I1216 05:00:03.245529  214758 kubeadm.go:401] StartCluster: {Name:pause-484421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:00:03.245632  214758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:00:03.245696  214758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:00:03.273391  214758 cri.go:89] found id: "d9defad5f5e81a2f4791a6552668e65d334fce9ddba96c6a67b8671f5d537f5a"
	I1216 05:00:03.273413  214758 cri.go:89] found id: "968f87f2624b3637b57b139c9e9e25f16f3f234a64e0731dcb862238901cb6ba"
	I1216 05:00:03.273417  214758 cri.go:89] found id: "d91dd18a4d06c4328e2d20df468a15ef3f5bbe97032ee881f9ad98aad838254a"
	I1216 05:00:03.273420  214758 cri.go:89] found id: "ec7e20b14ddebcfe8a170e905b9398b74f569c252da3551cef9f2dad88d54764"
	I1216 05:00:03.273423  214758 cri.go:89] found id: "cb8a304fb27d828a1b7836282095b903d391000ae25636c087d8fd606b472c2c"
	I1216 05:00:03.273426  214758 cri.go:89] found id: "58e9c0db8756debadb6ff202a2543533c637024ea73b8961b533b6fadd6efd18"
	I1216 05:00:03.273429  214758 cri.go:89] found id: "458bca56d150f434d9429af83aea10d61e55e77847c80da62658142b3d777fcc"
	I1216 05:00:03.273432  214758 cri.go:89] found id: ""
	I1216 05:00:03.273469  214758 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:00:03.285620  214758 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:00:03Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:00:03.285685  214758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:00:03.293628  214758 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:00:03.293649  214758 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:00:03.293701  214758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:00:03.300963  214758 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:00:03.301762  214758 kubeconfig.go:125] found "pause-484421" server: "https://192.168.85.2:8443"
	I1216 05:00:03.302899  214758 kapi.go:59] client config for pause-484421: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.key", CAFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:00:03.303318  214758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 05:00:03.303338  214758 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 05:00:03.303351  214758 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 05:00:03.303358  214758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 05:00:03.303362  214758 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 05:00:03.303672  214758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:00:03.311091  214758 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 05:00:03.311123  214758 kubeadm.go:602] duration metric: took 17.469509ms to restartPrimaryControlPlane
	I1216 05:00:03.311132  214758 kubeadm.go:403] duration metric: took 65.612349ms to StartCluster
	I1216 05:00:03.311147  214758 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:00:03.311218  214758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:00:03.312300  214758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:00:03.349174  214758 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:00:03.349299  214758 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:00:03.349469  214758 config.go:182] Loaded profile config "pause-484421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:00:03.376865  214758 out.go:179] * Verifying Kubernetes components...
	I1216 05:00:03.376859  214758 out.go:179] * Enabled addons: 
	I1216 05:00:00.305302  212055 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:00:00.305462  212055 start.go:159] libmachine.API.Create for "missing-upgrade-808405" (driver="docker")
	I1216 05:00:00.305490  212055 client.go:173] LocalClient.Create starting
	I1216 05:00:00.305593  212055 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:00:00.305636  212055 main.go:143] libmachine: Decoding PEM data...
	I1216 05:00:00.305653  212055 main.go:143] libmachine: Parsing certificate...
	I1216 05:00:00.305718  212055 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:00:00.305736  212055 main.go:143] libmachine: Decoding PEM data...
	I1216 05:00:00.305748  212055 main.go:143] libmachine: Parsing certificate...
	I1216 05:00:00.306004  212055 cli_runner.go:164] Run: docker network inspect missing-upgrade-808405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:00:00.322836  212055 cli_runner.go:211] docker network inspect missing-upgrade-808405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:00:00.322902  212055 network_create.go:284] running [docker network inspect missing-upgrade-808405] to gather additional debugging logs...
	I1216 05:00:00.322924  212055 cli_runner.go:164] Run: docker network inspect missing-upgrade-808405
	W1216 05:00:00.343107  212055 cli_runner.go:211] docker network inspect missing-upgrade-808405 returned with exit code 1
	I1216 05:00:00.343182  212055 network_create.go:287] error running [docker network inspect missing-upgrade-808405]: docker network inspect missing-upgrade-808405: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-808405 not found
	I1216 05:00:00.343246  212055 network_create.go:289] output of [docker network inspect missing-upgrade-808405]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-808405 not found
	
	** /stderr **
	I1216 05:00:00.343392  212055 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:00:00.364590  212055 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:00:00.365338  212055 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:00:00.366142  212055 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:00:00.367092  212055 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f90356ecb2b1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:f4:5a:57:87:23} reservation:<nil>}
	I1216 05:00:00.367872  212055 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-db2b418b51b2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:1a:e8:f9:ed:29:d6} reservation:<nil>}
	I1216 05:00:00.369002  212055 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020a9930}
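	The scan above walks candidate /24 subnets under 192.168.0.0/16, stepping the third octet by 9 (49, 58, 67, 76, 85, ...), and takes the first one no existing bridge network claims. A minimal sketch of that selection; freeSubnet is a hypothetical helper, and "taken" stands in for the subnets the docker network inspects reported:
	
		package main
	
		import "fmt"
	
		// freeSubnet returns the first 192.168.x.0/24 candidate, stepping the
		// third octet by 9 as in the log, that is not already taken.
		func freeSubnet(taken map[string]bool) string {
			for octet := 49; octet <= 254; octet += 9 {
				cidr := fmt.Sprintf("192.168.%d.0/24", octet)
				if !taken[cidr] {
					return cidr
				}
			}
			return "" // no free candidate
		}
	
		func main() {
			taken := map[string]bool{
				"192.168.49.0/24": true, "192.168.58.0/24": true,
				"192.168.67.0/24": true, "192.168.76.0/24": true,
				"192.168.85.0/24": true,
			}
			fmt.Println(freeSubnet(taken)) // prints 192.168.94.0/24, matching the log
		}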
	I1216 05:00:00.369057  212055 network_create.go:124] attempt to create docker network missing-upgrade-808405 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 05:00:00.369120  212055 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-808405 missing-upgrade-808405
	I1216 05:00:00.420983  212055 network_create.go:108] docker network missing-upgrade-808405 192.168.94.0/24 created
	I1216 05:00:00.421041  212055 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-808405" container
	I1216 05:00:00.421104  212055 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:00:00.439923  212055 cli_runner.go:164] Run: docker volume create missing-upgrade-808405 --label name.minikube.sigs.k8s.io=missing-upgrade-808405 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:00:00.456401  212055 oci.go:103] Successfully created a docker volume missing-upgrade-808405
	I1216 05:00:00.456492  212055 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-808405-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-808405 --entrypoint /usr/bin/test -v missing-upgrade-808405:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1216 05:00:00.744288  212055 oci.go:107] Successfully prepared a docker volume missing-upgrade-808405
	I1216 05:00:00.744357  212055 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 05:00:00.744374  212055 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:00:00.744447  212055 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-808405:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 05:00:03.423164  214758 addons.go:530] duration metric: took 73.876988ms for enable addons: enabled=[]
	I1216 05:00:03.423255  214758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:00:03.534994  214758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:00:03.549283  214758 node_ready.go:35] waiting up to 6m0s for node "pause-484421" to be "Ready" ...
	I1216 05:00:03.556778  214758 node_ready.go:49] node "pause-484421" is "Ready"
	I1216 05:00:03.556804  214758 node_ready.go:38] duration metric: took 7.480978ms for node "pause-484421" to be "Ready" ...
	I1216 05:00:03.556817  214758 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:00:03.556861  214758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:00:03.568869  214758 api_server.go:72] duration metric: took 219.637604ms to wait for apiserver process to appear ...
	I1216 05:00:03.568901  214758 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:00:03.568921  214758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:00:03.572858  214758 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:00:03.573799  214758 api_server.go:141] control plane version: v1.34.2
	I1216 05:00:03.573823  214758 api_server.go:131] duration metric: took 4.91514ms to wait for apiserver health ...
	I1216 05:00:03.573835  214758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:00:03.577334  214758 system_pods.go:59] 7 kube-system pods found
	I1216 05:00:03.577370  214758 system_pods.go:61] "coredns-66bc5c9577-b27l5" [bb106fc4-4ab0-4e07-bed9-6af94a348f88] Running
	I1216 05:00:03.577379  214758 system_pods.go:61] "etcd-pause-484421" [8bdf54c9-6a67-415b-ae16-a0640e2d1d8b] Running
	I1216 05:00:03.577384  214758 system_pods.go:61] "kindnet-tmm6g" [e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1] Running
	I1216 05:00:03.577390  214758 system_pods.go:61] "kube-apiserver-pause-484421" [14dc9352-7c85-4698-acdb-1f77b2fddd8b] Running
	I1216 05:00:03.577395  214758 system_pods.go:61] "kube-controller-manager-pause-484421" [ab6d7dd2-de5c-4549-835f-927c8964672b] Running
	I1216 05:00:03.577401  214758 system_pods.go:61] "kube-proxy-jlk4q" [f8ef9bea-11c1-4531-973c-debd3fcc0b5f] Running
	I1216 05:00:03.577408  214758 system_pods.go:61] "kube-scheduler-pause-484421" [f4182356-68e1-4dbb-a344-2a4fd652b8c8] Running
	I1216 05:00:03.577419  214758 system_pods.go:74] duration metric: took 3.576816ms to wait for pod list to return data ...
	I1216 05:00:03.577432  214758 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:00:03.579342  214758 default_sa.go:45] found service account: "default"
	I1216 05:00:03.579362  214758 default_sa.go:55] duration metric: took 1.92427ms for default service account to be created ...
	I1216 05:00:03.579372  214758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:00:03.581650  214758 system_pods.go:86] 7 kube-system pods found
	I1216 05:00:03.581671  214758 system_pods.go:89] "coredns-66bc5c9577-b27l5" [bb106fc4-4ab0-4e07-bed9-6af94a348f88] Running
	I1216 05:00:03.581676  214758 system_pods.go:89] "etcd-pause-484421" [8bdf54c9-6a67-415b-ae16-a0640e2d1d8b] Running
	I1216 05:00:03.581681  214758 system_pods.go:89] "kindnet-tmm6g" [e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1] Running
	I1216 05:00:03.581686  214758 system_pods.go:89] "kube-apiserver-pause-484421" [14dc9352-7c85-4698-acdb-1f77b2fddd8b] Running
	I1216 05:00:03.581692  214758 system_pods.go:89] "kube-controller-manager-pause-484421" [ab6d7dd2-de5c-4549-835f-927c8964672b] Running
	I1216 05:00:03.581696  214758 system_pods.go:89] "kube-proxy-jlk4q" [f8ef9bea-11c1-4531-973c-debd3fcc0b5f] Running
	I1216 05:00:03.581701  214758 system_pods.go:89] "kube-scheduler-pause-484421" [f4182356-68e1-4dbb-a344-2a4fd652b8c8] Running
	I1216 05:00:03.581712  214758 system_pods.go:126] duration metric: took 2.333298ms to wait for k8s-apps to be running ...
	I1216 05:00:03.581724  214758 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:00:03.581765  214758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:00:03.594264  214758 system_svc.go:56] duration metric: took 12.53497ms WaitForService to wait for kubelet
	I1216 05:00:03.594286  214758 kubeadm.go:587] duration metric: took 245.062744ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:00:03.594312  214758 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:00:03.596992  214758 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:00:03.597053  214758 node_conditions.go:123] node cpu capacity is 8
	I1216 05:00:03.597070  214758 node_conditions.go:105] duration metric: took 2.753664ms to run NodePressure ...
	I1216 05:00:03.597086  214758 start.go:242] waiting for startup goroutines ...
	I1216 05:00:03.597100  214758 start.go:247] waiting for cluster config update ...
	I1216 05:00:03.597111  214758 start.go:256] writing updated cluster config ...
	I1216 05:00:03.599537  214758 ssh_runner.go:195] Run: rm -f paused
	I1216 05:00:03.603523  214758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:00:03.604289  214758 kapi.go:59] client config for pause-484421: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/profiles/pause-484421/client.key", CAFile:"/home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:00:03.606853  214758 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b27l5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.610749  214758 pod_ready.go:94] pod "coredns-66bc5c9577-b27l5" is "Ready"
	I1216 05:00:03.610768  214758 pod_ready.go:86] duration metric: took 3.897182ms for pod "coredns-66bc5c9577-b27l5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.612494  214758 pod_ready.go:83] waiting for pod "etcd-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.615806  214758 pod_ready.go:94] pod "etcd-pause-484421" is "Ready"
	I1216 05:00:03.615822  214758 pod_ready.go:86] duration metric: took 3.310556ms for pod "etcd-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.617576  214758 pod_ready.go:83] waiting for pod "kube-apiserver-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.621042  214758 pod_ready.go:94] pod "kube-apiserver-pause-484421" is "Ready"
	I1216 05:00:03.621058  214758 pod_ready.go:86] duration metric: took 3.464745ms for pod "kube-apiserver-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:03.622566  214758 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:04.007416  214758 pod_ready.go:94] pod "kube-controller-manager-pause-484421" is "Ready"
	I1216 05:00:04.007449  214758 pod_ready.go:86] duration metric: took 384.86328ms for pod "kube-controller-manager-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:04.207312  214758 pod_ready.go:83] waiting for pod "kube-proxy-jlk4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:04.607513  214758 pod_ready.go:94] pod "kube-proxy-jlk4q" is "Ready"
	I1216 05:00:04.607543  214758 pod_ready.go:86] duration metric: took 400.205343ms for pod "kube-proxy-jlk4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:04.807898  214758 pod_ready.go:83] waiting for pod "kube-scheduler-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:05.207446  214758 pod_ready.go:94] pod "kube-scheduler-pause-484421" is "Ready"
	I1216 05:00:05.207476  214758 pod_ready.go:86] duration metric: took 399.552255ms for pod "kube-scheduler-pause-484421" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:00:05.207488  214758 pod_ready.go:40] duration metric: took 1.603940339s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
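	The extra wait above iterates over one label selector per control-plane component and blocks until every matching pod in kube-system reports Ready. Assuming kubectl is pointed at this cluster, the same gate can be sketched as:

	  # wait on the same selectors checked above, one at a time
	  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	             component=kube-controller-manager k8s-app=kube-proxy \
	             component=kube-scheduler; do
	    kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
	  done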
	I1216 05:00:05.259310  214758 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:00:05.261737  214758 out.go:179] * Done! kubectl is now configured to use "pause-484421" cluster and "default" namespace by default
	I1216 05:00:03.438508  213062 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 05:00:03.438591  213062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:00:03.609364  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 05:00:03.609402  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:04.411666  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:37718->192.168.103.2:8443: read: connection reset by peer
	I1216 05:00:04.411725  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:04.412092  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:04.605472  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:04.605879  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:05.105172  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:05.105621  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:05.605098  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:05.605536  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:06.105140  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:06.105566  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:06.605094  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:06.605480  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:07.105114  204381 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:00:07.105464  204381 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1216 05:00:04.504404  212055 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-808405:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (3.759907171s)
	I1216 05:00:04.504444  212055 kic.go:203] duration metric: took 3.760069723s to extract preloaded images to volume ...
	W1216 05:00:04.504542  212055 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 05:00:04.504638  212055 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 05:00:04.504694  212055 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 05:00:04.564403  212055 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-808405 --name missing-upgrade-808405 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-808405 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-808405 --network missing-upgrade-808405 --ip 192.168.94.2 --volume missing-upgrade-808405:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1216 05:00:04.853343  212055 cli_runner.go:164] Run: docker container inspect missing-upgrade-808405 --format={{.State.Running}}
	I1216 05:00:04.872535  212055 cli_runner.go:164] Run: docker container inspect missing-upgrade-808405 --format={{.State.Status}}
	I1216 05:00:04.893547  212055 cli_runner.go:164] Run: docker exec missing-upgrade-808405 stat /var/lib/dpkg/alternatives/iptables
	I1216 05:00:04.944991  212055 oci.go:144] the created container "missing-upgrade-808405" has a running status.
	I1216 05:00:04.945038  212055 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa...
	I1216 05:00:05.106578  212055 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 05:00:05.138943  212055 cli_runner.go:164] Run: docker container inspect missing-upgrade-808405 --format={{.State.Status}}
	I1216 05:00:05.163449  212055 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 05:00:05.163517  212055 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-808405 chown docker:docker /home/docker/.ssh/authorized_keys]
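	The key setup above generates an RSA keypair on the host, installs the public half into the container as /home/docker/.ssh/authorized_keys, and fixes its ownership. A rough hand-rolled sketch of the same steps (container name from this run; minikube itself does the copy through its kic_runner rather than docker cp):

	  ssh-keygen -t rsa -N '' -f ./id_rsa
	  docker exec missing-upgrade-808405 mkdir -p /home/docker/.ssh
	  docker cp ./id_rsa.pub missing-upgrade-808405:/home/docker/.ssh/authorized_keys
	  docker exec --privileged missing-upgrade-808405 chown docker:docker /home/docker/.ssh/authorized_keys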
	I1216 05:00:05.217064  212055 cli_runner.go:164] Run: docker container inspect missing-upgrade-808405 --format={{.State.Status}}
	I1216 05:00:05.237825  212055 machine.go:94] provisionDockerMachine start ...
	I1216 05:00:05.237920  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:05.260121  212055 main.go:143] libmachine: Using SSH client type: native
	I1216 05:00:05.260440  212055 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33021 <nil> <nil>}
	I1216 05:00:05.260461  212055 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:00:05.392266  212055 main.go:143] libmachine: SSH cmd err, output: <nil>: missing-upgrade-808405
	
	I1216 05:00:05.392303  212055 ubuntu.go:182] provisioning hostname "missing-upgrade-808405"
	I1216 05:00:05.392368  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:05.415404  212055 main.go:143] libmachine: Using SSH client type: native
	I1216 05:00:05.415718  212055 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33021 <nil> <nil>}
	I1216 05:00:05.415743  212055 main.go:143] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-808405 && echo "missing-upgrade-808405" | sudo tee /etc/hostname
	I1216 05:00:05.560873  212055 main.go:143] libmachine: SSH cmd err, output: <nil>: missing-upgrade-808405
	
	I1216 05:00:05.560991  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:05.580832  212055 main.go:143] libmachine: Using SSH client type: native
	I1216 05:00:05.581134  212055 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33021 <nil> <nil>}
	I1216 05:00:05.581165  212055 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-808405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-808405/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-808405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:00:05.713508  212055 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:00:05.713537  212055 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:00:05.713555  212055 ubuntu.go:190] setting up certificates
	I1216 05:00:05.713565  212055 provision.go:84] configureAuth start
	I1216 05:00:05.713615  212055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-808405
	I1216 05:00:05.732466  212055 provision.go:143] copyHostCerts
	I1216 05:00:05.732529  212055 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:00:05.732540  212055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:00:05.732627  212055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:00:05.732726  212055 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:00:05.732735  212055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:00:05.732763  212055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:00:05.732834  212055 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:00:05.732845  212055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:00:05.732879  212055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:00:05.732951  212055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-808405 san=[127.0.0.1 192.168.94.2 localhost minikube missing-upgrade-808405]
	I1216 05:00:05.823076  212055 provision.go:177] copyRemoteCerts
	I1216 05:00:05.823129  212055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:00:05.823162  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:05.840805  212055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33021 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa Username:docker}
	I1216 05:00:05.933541  212055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 05:00:05.959775  212055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:00:05.984427  212055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:00:06.008470  212055 provision.go:87] duration metric: took 294.882057ms to configureAuth
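	configureAuth above issues a server certificate from minikube's local CA with SANs covering the container's loopback and network addresses (the san=[...] list in the provision.go line). minikube does this in Go; a hypothetical openssl equivalent for the same SAN set would look like:

	  openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr -subj "/O=jenkins.missing-upgrade-808405"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:localhost,DNS:minikube,DNS:missing-upgrade-808405')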
	I1216 05:00:06.008497  212055 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:00:06.008647  212055 config.go:182] Loaded profile config "missing-upgrade-808405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 05:00:06.008765  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:06.026604  212055 main.go:143] libmachine: Using SSH client type: native
	I1216 05:00:06.026875  212055 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33021 <nil> <nil>}
	I1216 05:00:06.026900  212055 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:00:06.294149  212055 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:00:06.294171  212055 machine.go:97] duration metric: took 1.056327398s to provisionDockerMachine
	I1216 05:00:06.294180  212055 client.go:176] duration metric: took 5.988682235s to LocalClient.Create
	I1216 05:00:06.294206  212055 start.go:167] duration metric: took 5.988744371s to libmachine.API.Create "missing-upgrade-808405"
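	The container-runtime step just before this writes a one-line sysconfig drop-in handing CRI-O an --insecure-registry flag for the service CIDR, then restarts the runtime. A quick way to confirm it landed, run inside the node (whether crio.service actually sources the file depends on the unit shipped in the kicbase image):

	  cat /etc/sysconfig/crio.minikube
	  systemctl show crio --property=EnvironmentFiles
	  systemctl is-active crio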
	I1216 05:00:06.294213  212055 start.go:293] postStartSetup for "missing-upgrade-808405" (driver="docker")
	I1216 05:00:06.294224  212055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:00:06.294284  212055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:00:06.294321  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:06.312671  212055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33021 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa Username:docker}
	I1216 05:00:06.406038  212055 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:00:06.409451  212055 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:00:06.409477  212055 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 05:00:06.409485  212055 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 05:00:06.409490  212055 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 05:00:06.409499  212055 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:00:06.409550  212055 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:00:06.409645  212055 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:00:06.409761  212055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:00:06.418867  212055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:00:06.446287  212055 start.go:296] duration metric: took 152.059333ms for postStartSetup
	I1216 05:00:06.446638  212055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-808405
	I1216 05:00:06.464346  212055 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/missing-upgrade-808405/config.json ...
	I1216 05:00:06.464591  212055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:00:06.464642  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:06.481860  212055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33021 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa Username:docker}
	I1216 05:00:06.570550  212055 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:00:06.575094  212055 start.go:128] duration metric: took 6.271904887s to createHost
	I1216 05:00:06.575201  212055 cli_runner.go:164] Run: docker container inspect missing-upgrade-808405 --format={{.State.Status}}
	W1216 05:00:06.592802  212055 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:00:06.592828  212055 machine.go:94] provisionDockerMachine start ...
	I1216 05:00:06.592899  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:06.611193  212055 main.go:143] libmachine: Using SSH client type: native
	I1216 05:00:06.611404  212055 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33021 <nil> <nil>}
	I1216 05:00:06.611415  212055 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:00:06.737918  212055 main.go:143] libmachine: SSH cmd err, output: <nil>: missing-upgrade-808405
	
	I1216 05:00:06.737956  212055 ubuntu.go:182] provisioning hostname "missing-upgrade-808405"
	I1216 05:00:06.738034  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:06.759427  212055 main.go:143] libmachine: Using SSH client type: native
	I1216 05:00:06.759693  212055 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33021 <nil> <nil>}
	I1216 05:00:06.759707  212055 main.go:143] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-808405 && echo "missing-upgrade-808405" | sudo tee /etc/hostname
	I1216 05:00:06.902749  212055 main.go:143] libmachine: SSH cmd err, output: <nil>: missing-upgrade-808405
	
	I1216 05:00:06.902830  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:06.921391  212055 main.go:143] libmachine: Using SSH client type: native
	I1216 05:00:06.921735  212055 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33021 <nil> <nil>}
	I1216 05:00:06.921769  212055 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-808405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-808405/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-808405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:00:07.049596  212055 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:00:07.049631  212055 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:00:07.049657  212055 ubuntu.go:190] setting up certificates
	I1216 05:00:07.049669  212055 provision.go:84] configureAuth start
	I1216 05:00:07.049740  212055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-808405
	I1216 05:00:07.067686  212055 provision.go:143] copyHostCerts
	I1216 05:00:07.067757  212055 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:00:07.067771  212055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:00:07.067831  212055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:00:07.067931  212055 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:00:07.067943  212055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:00:07.067978  212055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:00:07.068104  212055 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:00:07.068115  212055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:00:07.068148  212055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:00:07.068234  212055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-808405 san=[127.0.0.1 192.168.94.2 localhost minikube missing-upgrade-808405]
	I1216 05:00:07.191857  212055 provision.go:177] copyRemoteCerts
	I1216 05:00:07.191911  212055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:00:07.191950  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:07.210005  212055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33021 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa Username:docker}
	I1216 05:00:07.303041  212055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:00:07.327498  212055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 05:00:07.352054  212055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:00:07.376743  212055 provision.go:87] duration metric: took 327.056945ms to configureAuth
	I1216 05:00:07.376771  212055 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:00:07.376947  212055 config.go:182] Loaded profile config "missing-upgrade-808405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 05:00:07.377114  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:07.394928  212055 main.go:143] libmachine: Using SSH client type: native
	I1216 05:00:07.395181  212055 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33021 <nil> <nil>}
	I1216 05:00:07.395197  212055 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:00:07.647642  212055 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:00:07.647668  212055 machine.go:97] duration metric: took 1.054832836s to provisionDockerMachine
	I1216 05:00:07.647683  212055 start.go:293] postStartSetup for "missing-upgrade-808405" (driver="docker")
	I1216 05:00:07.647696  212055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:00:07.647753  212055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:00:07.647805  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:07.667434  212055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33021 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa Username:docker}
	I1216 05:00:07.764196  212055 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:00:07.767818  212055 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:00:07.767844  212055 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 05:00:07.767851  212055 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 05:00:07.767857  212055 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 05:00:07.767868  212055 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:00:07.767911  212055 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:00:07.768002  212055 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:00:07.768152  212055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:00:07.777957  212055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:00:07.804868  212055 start.go:296] duration metric: took 157.170458ms for postStartSetup
	I1216 05:00:07.804953  212055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:00:07.805002  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:07.824380  212055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33021 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa Username:docker}
	I1216 05:00:07.915294  212055 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:00:07.919592  212055 fix.go:56] duration metric: took 24.309164445s for fixHost
	I1216 05:00:07.919617  212055 start.go:83] releasing machines lock for "missing-upgrade-808405", held for 24.309226077s
	I1216 05:00:07.919694  212055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-808405
	I1216 05:00:07.936737  212055 ssh_runner.go:195] Run: cat /version.json
	I1216 05:00:07.936793  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:07.936835  212055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:00:07.936907  212055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-808405
	I1216 05:00:07.954879  212055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33021 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa Username:docker}
	I1216 05:00:07.955705  212055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33021 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/missing-upgrade-808405/id_rsa Username:docker}
	W1216 05:00:08.129302  212055 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1216 05:00:08.129379  212055 ssh_runner.go:195] Run: systemctl --version
	I1216 05:00:08.134464  212055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:00:08.280548  212055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 05:00:08.285386  212055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:00:08.307160  212055 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1216 05:00:08.307246  212055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:00:08.336832  212055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
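	The CNI cleanup above disables competing configs by renaming them with a .mk_disabled suffix rather than deleting them, so the change is reversible. The rename pattern in isolation (directory and name patterns as in the log):

	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	  # revert by stripping the suffix again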
	I1216 05:00:08.336858  212055 start.go:496] detecting cgroup driver to use...
	I1216 05:00:08.336887  212055 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:00:08.336928  212055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:00:08.352786  212055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:00:08.364419  212055 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:00:08.364473  212055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:00:08.378852  212055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:00:08.394636  212055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:00:08.466090  212055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:00:08.544208  212055 docker.go:234] disabling docker service ...
	I1216 05:00:08.544272  212055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:00:08.564343  212055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:00:08.576596  212055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:00:08.647563  212055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:00:08.791921  212055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:00:08.804152  212055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:00:08.821454  212055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 05:00:08.821516  212055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:08.834454  212055 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:00:08.834507  212055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:08.844861  212055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:08.855175  212055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:08.865819  212055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:00:08.876488  212055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:08.887007  212055 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:08.904475  212055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:00:08.915603  212055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:00:08.925643  212055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:00:08.934749  212055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:00:09.059804  212055 ssh_runner.go:195] Run: sudo systemctl restart crio
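	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl, after which unit files are reloaded and CRI-O is restarted. A spot-check of the resulting drop-in (paths from this run):

	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl status crio --no-pager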
	I1216 05:00:09.165538  212055 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:00:09.165623  212055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:00:09.169849  212055 start.go:564] Will wait 60s for crictl version
	I1216 05:00:09.169908  212055 ssh_runner.go:195] Run: which crictl
	I1216 05:00:09.173607  212055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 05:00:09.210834  212055 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
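	The readiness gate above shells out to crictl, which reads the runtime endpoint written to /etc/crictl.yaml earlier (unix:///var/run/crio/crio.sock). The same probes by hand:

	  sudo crictl version   # runtime name/version, as printed above
	  sudo crictl info      # runtime status and conditions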
	I1216 05:00:09.210951  212055 ssh_runner.go:195] Run: crio --version
	I1216 05:00:09.245216  212055 ssh_runner.go:195] Run: crio --version
	I1216 05:00:09.281443  212055 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	
	
	==> CRI-O <==
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052116176Z" level=info msg="RDT not available in the host system"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052128778Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052900505Z" level=info msg="Conmon does support the --sync option"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052915671Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.052931557Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.05361757Z" level=info msg="Conmon does support the --sync option"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.053638432Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.057635631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.057659142Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.058424242Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.058922496Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.058984623Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.144664802Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-b27l5 Namespace:kube-system ID:c6bf05bd0501c624db05e187ea29b5ed88478aeede5b5098b5c73b8cfa5523e2 UID:bb106fc4-4ab0-4e07-bed9-6af94a348f88 NetNS:/var/run/netns/00bab8b3-6cf1-472f-8a10-759235ef7fc9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000146498}] Aliases:map[]}"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.144902282Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-b27l5 for CNI network kindnet (type=ptp)"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145480707Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.14551377Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145580378Z" level=info msg="Create NRI interface"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145694618Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145707778Z" level=info msg="runtime interface created"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145718442Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.14572344Z" level=info msg="runtime interface starting up..."
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.14572857Z" level=info msg="starting plugins..."
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.145739436Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 05:00:02 pause-484421 crio[2173]: time="2025-12-16T05:00:02.146030521Z" level=info msg="No systemd watchdog enabled"
	Dec 16 05:00:02 pause-484421 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d9defad5f5e81       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago       Running             coredns                   0                   c6bf05bd0501c       coredns-66bc5c9577-b27l5               kube-system
	968f87f2624b3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   56 seconds ago       Running             kindnet-cni               0                   e2fb888bcbcce       kindnet-tmm6g                          kube-system
	d91dd18a4d06c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   56 seconds ago       Running             kube-proxy                0                   9d5e4fbfce059       kube-proxy-jlk4q                       kube-system
	ec7e20b14ddeb       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   cc75c981cb5ca       etcd-pause-484421                      kube-system
	cb8a304fb27d8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   6ec9282222931       kube-controller-manager-pause-484421   kube-system
	58e9c0db8756d       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   c2248d20354a1       kube-scheduler-pause-484421            kube-system
	458bca56d150f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   5e0398d12ead1       kube-apiserver-pause-484421            kube-system
	
	
	==> coredns [d9defad5f5e81a2f4791a6552668e65d334fce9ddba96c6a67b8671f5d537f5a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43188 - 25060 "HINFO IN 454759291112319387.1071349700529759130. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.020286519s
	
	
	==> describe nodes <==
	Name:               pause-484421
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-484421
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=pause-484421
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T04_59_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 04:59:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-484421
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 04:59:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 04:59:58 +0000   Tue, 16 Dec 2025 04:59:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 04:59:58 +0000   Tue, 16 Dec 2025 04:59:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 04:59:58 +0000   Tue, 16 Dec 2025 04:59:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 04:59:58 +0000   Tue, 16 Dec 2025 04:59:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-484421
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                e2c99160-5189-445d-ad90-cc9a32c3ca20
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-b27l5                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-pause-484421                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         63s
	  kube-system                 kindnet-tmm6g                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-pause-484421             250m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-pause-484421    200m (2%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-jlk4q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-pause-484421             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node pause-484421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node pause-484421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node pause-484421 status is now: NodeHasSufficientPID
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s                kubelet          Node pause-484421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s                kubelet          Node pause-484421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s                kubelet          Node pause-484421 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node pause-484421 event: Registered Node pause-484421 in Controller
	  Normal  NodeReady                16s                kubelet          Node pause-484421 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [ec7e20b14ddebcfe8a170e905b9398b74f569c252da3551cef9f2dad88d54764] <==
	{"level":"warn","ts":"2025-12-16T04:59:04.192059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.202173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.211534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.222050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.230758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.245285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.279465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.285532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:04.363113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:59:13.375916Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.441792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-12-16T04:59:13.376061Z","caller":"traceutil/trace.go:172","msg":"trace[99070168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:360; }","duration":"131.623625ms","start":"2025-12-16T04:59:13.244422Z","end":"2025-12-16T04:59:13.376046Z","steps":["trace[99070168] 'agreement among raft nodes before linearized reading'  (duration: 49.281422ms)","trace[99070168] 'range keys from in-memory index tree'  (duration: 82.073565ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:59:13.375967Z","caller":"traceutil/trace.go:172","msg":"trace[1910788047] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"151.59471ms","start":"2025-12-16T04:59:13.224353Z","end":"2025-12-16T04:59:13.375948Z","steps":["trace[1910788047] 'process raft request'  (duration: 151.551386ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:13.375981Z","caller":"traceutil/trace.go:172","msg":"trace[1475198763] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"151.690557ms","start":"2025-12-16T04:59:13.224271Z","end":"2025-12-16T04:59:13.375962Z","steps":["trace[1475198763] 'process raft request'  (duration: 69.488272ms)","trace[1475198763] 'compare'  (duration: 82.029882ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:59:16.945855Z","caller":"traceutil/trace.go:172","msg":"trace[771357245] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"135.148202ms","start":"2025-12-16T04:59:16.810688Z","end":"2025-12-16T04:59:16.945836Z","steps":["trace[771357245] 'process raft request'  (duration: 134.362102ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:55.054711Z","caller":"traceutil/trace.go:172","msg":"trace[542602384] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"169.322853ms","start":"2025-12-16T04:59:54.885366Z","end":"2025-12-16T04:59:55.054689Z","steps":["trace[542602384] 'process raft request'  (duration: 169.200287ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:55.271363Z","caller":"traceutil/trace.go:172","msg":"trace[1152143919] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"208.880611ms","start":"2025-12-16T04:59:55.062460Z","end":"2025-12-16T04:59:55.271340Z","steps":["trace[1152143919] 'process raft request'  (duration: 142.240207ms)","trace[1152143919] 'compare'  (duration: 66.535419ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T04:59:55.545342Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.162112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-484421\" limit:1 ","response":"range_response_count:1 size:5412"}
	{"level":"info","ts":"2025-12-16T04:59:55.545408Z","caller":"traceutil/trace.go:172","msg":"trace[542468568] range","detail":"{range_begin:/registry/minions/pause-484421; range_end:; response_count:1; response_revision:429; }","duration":"143.233845ms","start":"2025-12-16T04:59:55.402157Z","end":"2025-12-16T04:59:55.545391Z","steps":["trace[542468568] 'agreement among raft nodes before linearized reading'  (duration: 80.696175ms)","trace[542468568] 'range keys from in-memory index tree'  (duration: 62.423681ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T04:59:55.545404Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.50687ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:59:55.545425Z","caller":"traceutil/trace.go:172","msg":"trace[564624687] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"270.081591ms","start":"2025-12-16T04:59:55.275330Z","end":"2025-12-16T04:59:55.545412Z","steps":["trace[564624687] 'process raft request'  (duration: 207.627992ms)","trace[564624687] 'compare'  (duration: 62.373248ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:59:55.545452Z","caller":"traceutil/trace.go:172","msg":"trace[1530607488] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:429; }","duration":"113.564521ms","start":"2025-12-16T04:59:55.431879Z","end":"2025-12-16T04:59:55.545443Z","steps":["trace[1530607488] 'range keys from in-memory index tree'  (duration: 113.492566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:56.730445Z","caller":"traceutil/trace.go:172","msg":"trace[1448634030] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:455; }","duration":"179.751872ms","start":"2025-12-16T04:59:56.550653Z","end":"2025-12-16T04:59:56.730405Z","steps":["trace[1448634030] 'read index received'  (duration: 179.744243ms)","trace[1448634030] 'applied index is now lower than readState.Index'  (duration: 6.658µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T04:59:56.730542Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.872036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:59:56.730575Z","caller":"traceutil/trace.go:172","msg":"trace[364574620] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"209.611303ms","start":"2025-12-16T04:59:56.520953Z","end":"2025-12-16T04:59:56.730565Z","steps":["trace[364574620] 'process raft request'  (duration: 209.50277ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:59:56.730590Z","caller":"traceutil/trace.go:172","msg":"trace[1530804081] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:436; }","duration":"179.93796ms","start":"2025-12-16T04:59:56.550643Z","end":"2025-12-16T04:59:56.730581Z","steps":["trace[1530804081] 'agreement among raft nodes before linearized reading'  (duration: 179.842664ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:00:10 up 42 min,  0 user,  load average: 4.77, 2.53, 1.64
	Linux pause-484421 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [968f87f2624b3637b57b139c9e9e25f16f3f234a64e0731dcb862238901cb6ba] <==
	I1216 04:59:14.221966       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 04:59:14.222242       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 04:59:14.222430       1 main.go:148] setting mtu 1500 for CNI 
	I1216 04:59:14.222451       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 04:59:14.222479       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T04:59:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 04:59:14.422460       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 04:59:14.422500       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 04:59:14.422513       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 04:59:14.422793       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1216 04:59:44.423287       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 04:59:44.423287       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 04:59:44.423290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1216 04:59:44.423290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1216 04:59:45.923360       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 04:59:45.923405       1 metrics.go:72] Registering metrics
	I1216 04:59:45.923483       1 controller.go:711] "Syncing nftables rules"
	I1216 04:59:54.429541       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 04:59:54.429614       1 main.go:301] handling current node
	I1216 05:00:04.427102       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:00:04.427151       1 main.go:301] handling current node
	
	
	==> kube-apiserver [458bca56d150f434d9429af83aea10d61e55e77847c80da62658142b3d777fcc] <==
	I1216 04:59:05.239124       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 04:59:05.239231       1 policy_source.go:240] refreshing policies
	I1216 04:59:05.251844       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 04:59:05.262417       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 04:59:05.268522       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 04:59:05.274390       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1216 04:59:05.328580       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 04:59:05.328671       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 04:59:06.031142       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 04:59:06.036412       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 04:59:06.036435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 04:59:06.562644       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 04:59:06.605132       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 04:59:06.739106       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 04:59:06.745326       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1216 04:59:06.746410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 04:59:06.750626       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 04:59:07.350346       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 04:59:07.517845       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 04:59:07.529924       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 04:59:07.538453       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 04:59:13.048867       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1216 04:59:13.151891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 04:59:13.157777       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 04:59:13.381936       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cb8a304fb27d828a1b7836282095b903d391000ae25636c087d8fd606b472c2c] <==
	I1216 04:59:12.347056       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 04:59:12.347066       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 04:59:12.347107       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 04:59:12.347151       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 04:59:12.347197       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 04:59:12.348877       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:59:12.348912       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 04:59:12.349081       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 04:59:12.349140       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-484421"
	I1216 04:59:12.349176       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1216 04:59:12.349397       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 04:59:12.351373       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 04:59:12.352573       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 04:59:12.352630       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 04:59:12.352673       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 04:59:12.352685       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 04:59:12.352693       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 04:59:12.354871       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 04:59:12.355112       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:59:12.359302       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 04:59:12.360295       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-484421" podCIDRs=["10.244.0.0/24"]
	I1216 04:59:12.363609       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 04:59:12.366176       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 04:59:12.373402       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 04:59:57.353316       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d91dd18a4d06c4328e2d20df468a15ef3f5bbe97032ee881f9ad98aad838254a] <==
	I1216 04:59:14.083363       1 server_linux.go:53] "Using iptables proxy"
	I1216 04:59:14.158816       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 04:59:14.259861       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 04:59:14.259908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 04:59:14.260054       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 04:59:14.279615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 04:59:14.279689       1 server_linux.go:132] "Using iptables Proxier"
	I1216 04:59:14.285652       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 04:59:14.286455       1 server.go:527] "Version info" version="v1.34.2"
	I1216 04:59:14.286483       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 04:59:14.288565       1 config.go:200] "Starting service config controller"
	I1216 04:59:14.288590       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 04:59:14.288603       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 04:59:14.288620       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 04:59:14.288624       1 config.go:106] "Starting endpoint slice config controller"
	I1216 04:59:14.288630       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 04:59:14.288673       1 config.go:309] "Starting node config controller"
	I1216 04:59:14.288679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 04:59:14.288685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 04:59:14.389647       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 04:59:14.389704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 04:59:14.389773       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [58e9c0db8756debadb6ff202a2543533c637024ea73b8961b533b6fadd6efd18] <==
	E1216 04:59:05.231734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:59:05.231881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 04:59:05.231928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:59:05.231975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 04:59:05.234142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:59:05.234293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:59:05.234355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 04:59:05.234399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 04:59:05.234443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:59:05.234555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 04:59:05.234626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:59:05.234693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 04:59:05.234754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 04:59:05.234839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:59:05.234898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 04:59:05.234962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:59:05.236539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 04:59:05.244243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 04:59:05.256079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:59:06.128527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 04:59:06.219548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:59:06.249142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:59:06.251044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:59:06.260446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1216 04:59:06.720631       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 04:59:13 pause-484421 kubelet[1320]: E1216 04:59:13.222812    1320 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 16 04:59:13 pause-484421 kubelet[1320]: E1216 04:59:13.222839    1320 projected.go:196] Error preparing data for projected volume kube-api-access-vt8zp for pod kube-system/kindnet-tmm6g: configmap "kube-root-ca.crt" not found
	Dec 16 04:59:13 pause-484421 kubelet[1320]: E1216 04:59:13.222911    1320 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1-kube-api-access-vt8zp podName:e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1 nodeName:}" failed. No retries permitted until 2025-12-16 04:59:13.722891489 +0000 UTC m=+6.441346801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vt8zp" (UniqueName: "kubernetes.io/projected/e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1-kube-api-access-vt8zp") pod "kindnet-tmm6g" (UID: "e58c0d97-1d1c-4ff1-96e6-f49aa4c9c5b1") : configmap "kube-root-ca.crt" not found
	Dec 16 04:59:14 pause-484421 kubelet[1320]: I1216 04:59:14.455370    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tmm6g" podStartSLOduration=1.455349206 podStartE2EDuration="1.455349206s" podCreationTimestamp="2025-12-16 04:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:59:14.441507412 +0000 UTC m=+7.159962732" watchObservedRunningTime="2025-12-16 04:59:14.455349206 +0000 UTC m=+7.173804527"
	Dec 16 04:59:14 pause-484421 kubelet[1320]: I1216 04:59:14.465298    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jlk4q" podStartSLOduration=1.465275795 podStartE2EDuration="1.465275795s" podCreationTimestamp="2025-12-16 04:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:59:14.465268334 +0000 UTC m=+7.183723654" watchObservedRunningTime="2025-12-16 04:59:14.465275795 +0000 UTC m=+7.183731116"
	Dec 16 04:59:54 pause-484421 kubelet[1320]: I1216 04:59:54.880069    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 16 04:59:55 pause-484421 kubelet[1320]: I1216 04:59:55.703326    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb106fc4-4ab0-4e07-bed9-6af94a348f88-config-volume\") pod \"coredns-66bc5c9577-b27l5\" (UID: \"bb106fc4-4ab0-4e07-bed9-6af94a348f88\") " pod="kube-system/coredns-66bc5c9577-b27l5"
	Dec 16 04:59:55 pause-484421 kubelet[1320]: I1216 04:59:55.703371    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqz8\" (UniqueName: \"kubernetes.io/projected/bb106fc4-4ab0-4e07-bed9-6af94a348f88-kube-api-access-clqz8\") pod \"coredns-66bc5c9577-b27l5\" (UID: \"bb106fc4-4ab0-4e07-bed9-6af94a348f88\") " pod="kube-system/coredns-66bc5c9577-b27l5"
	Dec 16 04:59:56 pause-484421 kubelet[1320]: I1216 04:59:56.732353    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b27l5" podStartSLOduration=43.73233448 podStartE2EDuration="43.73233448s" podCreationTimestamp="2025-12-16 04:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:59:56.732166223 +0000 UTC m=+49.450621543" watchObservedRunningTime="2025-12-16 04:59:56.73233448 +0000 UTC m=+49.450789799"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: W1216 05:00:01.393213    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.393319    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.393410    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.393428    1320 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.393443    1320 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: W1216 05:00:01.493607    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.524826    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.524878    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: E1216 05:00:01.524892    1320 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 16 05:00:01 pause-484421 kubelet[1320]: W1216 05:00:01.655080    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 16 05:00:01 pause-484421 kubelet[1320]: W1216 05:00:01.922007    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 16 05:00:02 pause-484421 kubelet[1320]: E1216 05:00:02.412180    1320 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Dec 16 05:00:05 pause-484421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:00:05 pause-484421 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:00:05 pause-484421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:05 pause-484421 systemd[1]: kubelet.service: Consumed 2.204s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-484421 -n pause-484421
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-484421 -n pause-484421: exit status 2 (314.663853ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-484421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.86s)
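
The post-mortem logs above show kubelet being stopped by systemd and subsequent dials to /var/run/crio/crio.sock failing, while `minikube status` still reports the API server as Running; that mismatch is what this pause test appears to trip over. Below is a minimal sketch of the same pause-then-status flow, driven through the minikube CLI rather than the test harness; the binary path and the reuse of profile pause-484421 are assumptions, not part of the original run:

	// pausecheck.go: a minimal sketch of the pause-then-status flow exercised
	// by TestPause/serial/Pause, using the minikube CLI directly. The profile
	// name is taken from the logs above; the binary path is an assumption.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func run(bin string, args ...string) (string, error) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		const bin = "out/minikube-linux-amd64" // assumption: locally built binary
		const profile = "pause-484421"
	
		if out, err := run(bin, "pause", "-p", profile); err != nil {
			fmt.Printf("pause failed: %v\n%s", err, out)
			return
		}
		// After a successful pause the status query is expected to report the
		// paused components as stopped; an API server still shown as "Running"
		// (as in the post-mortem above) is the inconsistency under test.
		out, err := run(bin, "status", "--format={{.APIServer}}", "-p", profile)
		fmt.Printf("APIServer status: %s (err: %v)\n", out, err)
	}
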

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (552.608429ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:03:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-545075 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-545075 describe deploy/metrics-server -n kube-system: exit status 1 (63.340297ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-545075 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
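The failure chain here is self-consistent: `addons enable` aborts inside minikube's paused-state check, which shells into the node and runs `sudo runc list -f json`; on this crio-based node /run/runc does not exist, so the check errors out before any metrics-server manifest is applied, and the follow-up `kubectl describe deploy/metrics-server` can only report NotFound. A minimal sketch reproducing that probe over `minikube ssh` (the binary path is an assumption; the profile name and command come from the output above):

	// runcprobe.go: a minimal sketch of the paused-state probe that fails
	// above. It reruns the reported command ("sudo runc list -f json") inside
	// the node via `minikube ssh`.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		const bin = "out/minikube-linux-amd64" // assumption: locally built binary
		const profile = "old-k8s-version-545075"
	
		cmd := exec.Command(bin, "ssh", "-p", profile, "--",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// On this node the expected failure mode is the one logged above:
			// `open /run/runc: no such file or directory`.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers: %s\n", out)
	}

If the probe fails the same way outside the harness, that would suggest the paused check's assumption about the runc state directory, rather than the addon itself, is what breaks on crio.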
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-545075
helpers_test.go:244: (dbg) docker inspect old-k8s-version-545075:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa",
	        "Created": "2025-12-16T05:02:59.987474092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256879,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:03:00.026127911Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/hosts",
	        "LogPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa-json.log",
	        "Name": "/old-k8s-version-545075",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-545075:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-545075",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa",
	                "LowerDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-545075",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-545075/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-545075",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-545075",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-545075",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "80b5ba80dd6c5afff5eccdc81641648dda396258b2ccf355be152fd1fc557d93",
	            "SandboxKey": "/var/run/docker/netns/80b5ba80dd6c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-545075": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "333da06948413c05c3047e735767d3346609d7c3a2cd33e3c019b11d1e8453a4",
	                    "EndpointID": "cedf6a3c9e92bf3340de15ab0cc6ae87a59cacad82ac748f442d84c6f2eaa829",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6e:1c:71:af:e4:35",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-545075",
	                        "670e802935b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
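
For anyone replaying this post-mortem by hand: the host port mapped to the node container's SSH port (33061 in the inspect output above) can be read with a Go template instead of scanning the full JSON. A minimal sketch, assuming the container name from this run:

	# print the host port bound to 22/tcp of the minikube node container
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' old-k8s-version-545075
	# expected for this run: 33061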
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-545075 -n old-k8s-version-545075
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-545075 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-545075 logs -n 25: (1.644371296s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-548714 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo docker system info                                                                                                                                                                                                      │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo containerd config dump                                                                                                                                                                                                  │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo crio config                                                                                                                                                                                                             │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ delete  │ -p cilium-548714                                                                                                                                                                                                                              │ cilium-548714          │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │ 16 Dec 25 05:02 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075 │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p stopped-upgrade-572334                                                                                                                                                                                                                     │ stopped-upgrade-572334 │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631      │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │                     │
	│ start   │ -p cert-expiration-123545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-123545 │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p cert-expiration-123545                                                                                                                                                                                                                     │ cert-expiration-123545 │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599     │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-545075 │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:03:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:03:49.752506  268657 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:03:49.752763  268657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:03:49.752772  268657 out.go:374] Setting ErrFile to fd 2...
	I1216 05:03:49.752776  268657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:03:49.752972  268657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:03:49.753433  268657 out.go:368] Setting JSON to false
	I1216 05:03:49.754547  268657 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2781,"bootTime":1765858649,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:03:49.754599  268657 start.go:143] virtualization: kvm guest
	I1216 05:03:49.756672  268657 out.go:179] * [embed-certs-127599] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:03:49.758049  268657 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:03:49.758054  268657 notify.go:221] Checking for updates...
	I1216 05:03:49.760304  268657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:03:49.761518  268657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:03:49.762539  268657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:03:49.763657  268657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:03:49.764705  268657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:03:49.766280  268657 config.go:182] Loaded profile config "kubernetes-upgrade-132399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:03:49.766396  268657 config.go:182] Loaded profile config "no-preload-990631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:03:49.766486  268657 config.go:182] Loaded profile config "old-k8s-version-545075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:03:49.766570  268657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:03:49.790568  268657 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:03:49.790675  268657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:03:49.847135  268657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-16 05:03:49.836817324 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:03:49.847301  268657 docker.go:319] overlay module found
	I1216 05:03:49.849123  268657 out.go:179] * Using the docker driver based on user configuration
	I1216 05:03:49.850334  268657 start.go:309] selected driver: docker
	I1216 05:03:49.850353  268657 start.go:927] validating driver "docker" against <nil>
	I1216 05:03:49.850372  268657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:03:49.851328  268657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:03:49.915130  268657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-16 05:03:49.9033424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:03:49.915351  268657 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:03:49.915630  268657 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:03:49.920520  268657 out.go:179] * Using Docker driver with root privileges
	I1216 05:03:49.921810  268657 cni.go:84] Creating CNI manager for ""
	I1216 05:03:49.921878  268657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:03:49.921889  268657 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:03:49.921965  268657 start.go:353] cluster config:
	{Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:03:49.923388  268657 out.go:179] * Starting "embed-certs-127599" primary control-plane node in "embed-certs-127599" cluster
	I1216 05:03:49.924633  268657 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:03:49.925902  268657 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:03:49.927067  268657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:03:49.927113  268657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:03:49.927133  268657 cache.go:65] Caching tarball of preloaded images
	I1216 05:03:49.927159  268657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:03:49.927250  268657 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:03:49.927265  268657 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:03:49.927389  268657 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/config.json ...
	I1216 05:03:49.927421  268657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/config.json: {Name:mk10a04aa71466a4b63ef3cd0f9e765665bb86f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:03:49.949011  268657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:03:49.949048  268657 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:03:49.949067  268657 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:03:49.949100  268657 start.go:360] acquireMachinesLock for embed-certs-127599: {Name:mk782536a8f26bd7063e28ead9562fcba9d13ffe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:03:49.949216  268657 start.go:364] duration metric: took 93.829µs to acquireMachinesLock for "embed-certs-127599"
	I1216 05:03:49.949245  268657 start.go:93] Provisioning new machine with config: &{Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:03:49.949321  268657 start.go:125] createHost starting for "" (driver="docker")
	I1216 05:03:47.062287  263814 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.462711617s)
	I1216 05:03:47.062318  263814 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22141-5066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1216 05:03:47.062345  263814 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 05:03:47.062340  263814 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.462730153s)
	I1216 05:03:47.062374  263814 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 05:03:47.062392  263814 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 05:03:47.062399  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1216 05:03:48.138487  263814 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.076062198s)
	I1216 05:03:48.138518  263814 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22141-5066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1216 05:03:48.138547  263814 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 05:03:48.138605  263814 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 05:03:49.615724  263814 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.477091565s)
	I1216 05:03:49.615754  263814 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22141-5066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1216 05:03:49.615776  263814 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 05:03:49.615813  263814 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 05:03:50.501257  263814 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22141-5066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 05:03:50.501309  263814 cache_images.go:125] Successfully loaded all cached images
	I1216 05:03:50.501318  263814 cache_images.go:94] duration metric: took 9.837575754s to LoadCachedImages
	I1216 05:03:50.501334  263814 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 05:03:50.501472  263814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-990631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
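
The drop-in above is what minikube renders into /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 375-byte scp later in this log). To see what actually landed on the node, a minimal sketch assuming the profile name from this run:

	# show the effective kubelet unit, including minikube's ExecStart override
	minikube -p no-preload-990631 ssh -- sudo systemctl cat kubelet --no-pager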
	I1216 05:03:50.501570  263814 ssh_runner.go:195] Run: crio config
	I1216 05:03:50.552889  263814 cni.go:84] Creating CNI manager for ""
	I1216 05:03:50.552919  263814 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:03:50.552941  263814 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:03:50.552972  263814 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-990631 NodeName:no-preload-990631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:03:50.553182  263814 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-990631"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
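
A rendered config like this can be sanity-checked before kubeadm consumes it. A hedged sketch, assuming the path the log shows minikube writing below (/var/tmp/minikube/kubeadm.yaml.new) and a kubeadm new enough to carry the validate subcommand:

	# offline validation of the generated kubeadm config (kubeadm >= 1.26)
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new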
	
	I1216 05:03:50.553267  263814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:03:50.562317  263814 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1216 05:03:50.562365  263814 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:03:50.570370  263814 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1216 05:03:50.570435  263814 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22141-5066/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1216 05:03:50.570463  263814 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1216 05:03:50.570492  263814 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22141-5066/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1216 05:03:50.575275  263814 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1216 05:03:50.575308  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1216 05:03:51.393960  263814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:03:51.412103  263814 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1216 05:03:51.416920  263814 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1216 05:03:51.416957  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1216 05:03:51.558118  263814 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1216 05:03:51.566496  263814 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1216 05:03:51.566537  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
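
Each download above pins the binary to its published sha256 via the checksum= fragment. The manual equivalent, sketched with the standard dl.k8s.io layout the log is already using:

	# fetch kubectl and verify it against the published checksum
	curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl"
	curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check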
	I1216 05:03:48.275118  213062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:03:48.275583  213062 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 05:03:48.275653  213062 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:03:48.275722  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:03:48.305159  213062 cri.go:89] found id: "0ea5bba1bc94f8cd278caba32f9f0e97c7da7dc2a296e396a83420b6b6100e71"
	I1216 05:03:48.305180  213062 cri.go:89] found id: ""
	I1216 05:03:48.305195  213062 logs.go:282] 1 containers: [0ea5bba1bc94f8cd278caba32f9f0e97c7da7dc2a296e396a83420b6b6100e71]
	I1216 05:03:48.305250  213062 ssh_runner.go:195] Run: which crictl
	I1216 05:03:48.309268  213062 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:03:48.309327  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:03:48.337841  213062 cri.go:89] found id: ""
	I1216 05:03:48.337868  213062 logs.go:282] 0 containers: []
	W1216 05:03:48.337879  213062 logs.go:284] No container was found matching "etcd"
	I1216 05:03:48.337886  213062 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:03:48.337942  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:03:48.365156  213062 cri.go:89] found id: ""
	I1216 05:03:48.365178  213062 logs.go:282] 0 containers: []
	W1216 05:03:48.365193  213062 logs.go:284] No container was found matching "coredns"
	I1216 05:03:48.365199  213062 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:03:48.365255  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:03:48.394203  213062 cri.go:89] found id: "d4205c503f3330ddd157f308dce339359232522de67cb0be8766ab04af8ddd54"
	I1216 05:03:48.394225  213062 cri.go:89] found id: ""
	I1216 05:03:48.394234  213062 logs.go:282] 1 containers: [d4205c503f3330ddd157f308dce339359232522de67cb0be8766ab04af8ddd54]
	I1216 05:03:48.394336  213062 ssh_runner.go:195] Run: which crictl
	I1216 05:03:48.399294  213062 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:03:48.399372  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:03:48.432441  213062 cri.go:89] found id: ""
	I1216 05:03:48.432493  213062 logs.go:282] 0 containers: []
	W1216 05:03:48.432511  213062 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:48.432519  213062 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:03:48.432566  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:03:48.461444  213062 cri.go:89] found id: "8a0835d3aa14fd7d13816b22563937168149ea726495b2a03c4583012985ab6f"
	I1216 05:03:48.461468  213062 cri.go:89] found id: ""
	I1216 05:03:48.461479  213062 logs.go:282] 1 containers: [8a0835d3aa14fd7d13816b22563937168149ea726495b2a03c4583012985ab6f]
	I1216 05:03:48.461539  213062 ssh_runner.go:195] Run: which crictl
	I1216 05:03:48.465517  213062 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:03:48.465594  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:03:48.493032  213062 cri.go:89] found id: ""
	I1216 05:03:48.493057  213062 logs.go:282] 0 containers: []
	W1216 05:03:48.493067  213062 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:48.493074  213062 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:03:48.493133  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:03:48.521647  213062 cri.go:89] found id: ""
	I1216 05:03:48.521670  213062 logs.go:282] 0 containers: []
	W1216 05:03:48.521681  213062 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:03:48.521692  213062 logs.go:123] Gathering logs for kube-controller-manager [8a0835d3aa14fd7d13816b22563937168149ea726495b2a03c4583012985ab6f] ...
	I1216 05:03:48.521707  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8a0835d3aa14fd7d13816b22563937168149ea726495b2a03c4583012985ab6f"
	I1216 05:03:48.549382  213062 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:03:48.549407  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:03:48.610556  213062 logs.go:123] Gathering logs for container status ...
	I1216 05:03:48.610594  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:48.641692  213062 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:48.641717  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:48.730561  213062 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:48.730591  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:48.745561  213062 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:48.745588  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:48.806680  213062 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:48.806705  213062 logs.go:123] Gathering logs for kube-apiserver [0ea5bba1bc94f8cd278caba32f9f0e97c7da7dc2a296e396a83420b6b6100e71] ...
	I1216 05:03:48.806720  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0ea5bba1bc94f8cd278caba32f9f0e97c7da7dc2a296e396a83420b6b6100e71"
	I1216 05:03:48.837960  213062 logs.go:123] Gathering logs for kube-scheduler [d4205c503f3330ddd157f308dce339359232522de67cb0be8766ab04af8ddd54] ...
	I1216 05:03:48.837988  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4205c503f3330ddd157f308dce339359232522de67cb0be8766ab04af8ddd54"
	I1216 05:03:51.367619  213062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:03:51.368094  213062 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1216 05:03:51.368158  213062 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:03:51.368222  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:03:51.408717  213062 cri.go:89] found id: "0ea5bba1bc94f8cd278caba32f9f0e97c7da7dc2a296e396a83420b6b6100e71"
	I1216 05:03:51.408744  213062 cri.go:89] found id: ""
	I1216 05:03:51.408754  213062 logs.go:282] 1 containers: [0ea5bba1bc94f8cd278caba32f9f0e97c7da7dc2a296e396a83420b6b6100e71]
	I1216 05:03:51.408809  213062 ssh_runner.go:195] Run: which crictl
	I1216 05:03:51.413865  213062 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:03:51.413930  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:03:51.449851  213062 cri.go:89] found id: ""
	I1216 05:03:51.449879  213062 logs.go:282] 0 containers: []
	W1216 05:03:51.449887  213062 logs.go:284] No container was found matching "etcd"
	I1216 05:03:51.449893  213062 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:03:51.449940  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:03:51.487437  213062 cri.go:89] found id: ""
	I1216 05:03:51.487465  213062 logs.go:282] 0 containers: []
	W1216 05:03:51.487477  213062 logs.go:284] No container was found matching "coredns"
	I1216 05:03:51.487488  213062 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:03:51.487560  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:03:51.526763  213062 cri.go:89] found id: "d4205c503f3330ddd157f308dce339359232522de67cb0be8766ab04af8ddd54"
	I1216 05:03:51.526786  213062 cri.go:89] found id: ""
	I1216 05:03:51.526797  213062 logs.go:282] 1 containers: [d4205c503f3330ddd157f308dce339359232522de67cb0be8766ab04af8ddd54]
	I1216 05:03:51.526865  213062 ssh_runner.go:195] Run: which crictl
	I1216 05:03:51.533151  213062 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:03:51.533282  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:03:51.571271  213062 cri.go:89] found id: ""
	I1216 05:03:51.571306  213062 logs.go:282] 0 containers: []
	W1216 05:03:51.571320  213062 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:51.571330  213062 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:03:51.571395  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:03:51.604177  213062 cri.go:89] found id: "8a0835d3aa14fd7d13816b22563937168149ea726495b2a03c4583012985ab6f"
	I1216 05:03:51.604213  213062 cri.go:89] found id: ""
	I1216 05:03:51.604222  213062 logs.go:282] 1 containers: [8a0835d3aa14fd7d13816b22563937168149ea726495b2a03c4583012985ab6f]
	I1216 05:03:51.604280  213062 ssh_runner.go:195] Run: which crictl
	I1216 05:03:51.608503  213062 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:03:51.608581  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:03:51.645697  213062 cri.go:89] found id: ""
	I1216 05:03:51.645724  213062 logs.go:282] 0 containers: []
	W1216 05:03:51.645735  213062 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:51.645743  213062 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:03:51.645799  213062 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:03:51.681945  213062 cri.go:89] found id: ""
	I1216 05:03:51.681979  213062 logs.go:282] 0 containers: []
	W1216 05:03:51.681990  213062 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:03:51.682004  213062 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:51.682033  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:51.765743  213062 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:51.765769  213062 logs.go:123] Gathering logs for kube-apiserver [0ea5bba1bc94f8cd278caba32f9f0e97c7da7dc2a296e396a83420b6b6100e71] ...
	I1216 05:03:51.765785  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0ea5bba1bc94f8cd278caba32f9f0e97c7da7dc2a296e396a83420b6b6100e71"
	I1216 05:03:51.803526  213062 logs.go:123] Gathering logs for kube-scheduler [d4205c503f3330ddd157f308dce339359232522de67cb0be8766ab04af8ddd54] ...
	I1216 05:03:51.803558  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4205c503f3330ddd157f308dce339359232522de67cb0be8766ab04af8ddd54"
	I1216 05:03:51.833443  213062 logs.go:123] Gathering logs for kube-controller-manager [8a0835d3aa14fd7d13816b22563937168149ea726495b2a03c4583012985ab6f] ...
	I1216 05:03:51.833479  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8a0835d3aa14fd7d13816b22563937168149ea726495b2a03c4583012985ab6f"
	I1216 05:03:51.861474  213062 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:03:51.861501  213062 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
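
The repeated healthz failures above are plain TCP connect refusals, so they reproduce without minikube in the loop. A minimal sketch, assuming the endpoint from this log is reachable from the host and skipping TLS verification because the apiserver serves a cluster-local certificate:

	# -k: self-signed apiserver cert; connection refused means the apiserver process is down
	curl -sk https://192.168.76.2:8443/healthz || echo "apiserver not responding"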
	I1216 05:03:51.794510  263814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:03:51.805713  263814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1216 05:03:51.819984  263814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:03:51.838220  263814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1216 05:03:51.854136  263814 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:03:51.859239  263814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:03:51.871189  263814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:03:51.973542  263814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:03:51.998693  263814 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631 for IP: 192.168.103.2
	I1216 05:03:51.998716  263814 certs.go:195] generating shared ca certs ...
	I1216 05:03:51.998736  263814 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:03:51.998891  263814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:03:51.998973  263814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:03:51.998985  263814 certs.go:257] generating profile certs ...
	I1216 05:03:51.999089  263814 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.key
	I1216 05:03:51.999107  263814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.crt with IP's: []
	I1216 05:03:52.062993  263814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.crt ...
	I1216 05:03:52.063034  263814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.crt: {Name:mkc89a3a6b9d4b16ec6127b8504f371ec5e67155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:03:52.063198  263814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.key ...
	I1216 05:03:52.063211  263814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.key: {Name:mkb3f5cff04f9241581fdfe0eb6f8a3cd8c7b3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:03:52.063293  263814 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key.44083b8e
	I1216 05:03:52.063308  263814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt.44083b8e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 05:03:52.102790  263814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt.44083b8e ...
	I1216 05:03:52.102817  263814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt.44083b8e: {Name:mk35ccd65a9d5fbb41f2f4282ae87bbb797e4cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:03:52.102964  263814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key.44083b8e ...
	I1216 05:03:52.102983  263814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key.44083b8e: {Name:mk7c881d552af03cf0d9bce90132d68067419cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:03:52.103081  263814 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt.44083b8e -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt
	I1216 05:03:52.103160  263814 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key.44083b8e -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key
	I1216 05:03:52.103214  263814 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key
	I1216 05:03:52.103238  263814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.crt with IP's: []
	I1216 05:03:52.253391  263814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.crt ...
	I1216 05:03:52.253423  263814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.crt: {Name:mk48f82b0c7e79ad9e195645c203a04de4757e37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:03:52.253583  263814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key ...
	I1216 05:03:52.253598  263814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key: {Name:mkca18f5e80807dda0a62046e48e173febe9699f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
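Between 05:03:51.999 and 05:03:52.253 the profile certs are minted: each is a leaf signed by the shared minikubeCA, and the apiserver cert carries IP SANs for the service VIP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2 above). A self-contained crypto/x509 sketch of that signing step, with the CA generated in memory for brevity where minikube loads ca.crt/ca.key from disk:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the on-disk minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf with the IP SANs seen in the "Generating cert ... with IP's"
	// line; this is the apiserver profile cert in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("signed apiserver cert: %d DER bytes\n", len(leafDER))
}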
	I1216 05:03:52.253779  263814 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:03:52.253817  263814 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:03:52.253827  263814 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:03:52.253852  263814 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:03:52.253877  263814 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:03:52.253899  263814 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:03:52.253939  263814 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:03:52.254551  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:03:52.273225  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:03:52.290977  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:03:52.308532  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:03:52.325770  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:03:52.344498  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:03:52.361648  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:03:52.378620  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:03:52.396617  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:03:52.420847  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:03:52.439080  263814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:03:52.457458  263814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:03:52.470371  263814 ssh_runner.go:195] Run: openssl version
	I1216 05:03:52.477006  263814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:03:52.484859  263814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:03:52.492776  263814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:03:52.496906  263814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:03:52.496953  263814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:03:52.532073  263814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:03:52.540430  263814 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
	I1216 05:03:52.548229  263814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:03:52.556451  263814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:03:52.564340  263814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:03:52.568265  263814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:03:52.568325  263814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:03:52.604091  263814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:03:52.612908  263814 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:03:52.620593  263814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:03:52.628940  263814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:03:52.636837  263814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:03:52.640943  263814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:03:52.640992  263814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:03:52.675765  263814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:03:52.684171  263814 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
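The run from 05:03:52.477 to 05:03:52.684 installs each PEM under /usr/share/ca-certificates and then symlinks it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem above), which is the hashed-directory layout most TLS stacks scan. A sketch of that dance, shelling out to openssl for the hash since the subject-hash algorithm is OpenSSL-specific (helper name illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the log's openssl + ln -fs sequence: ask openssl
// for the subject hash of the PEM, then link /etc/ssl/certs/<hash>.0 at
// the installed file so libraries scanning the hashed dir can find it.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}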
	I1216 05:03:52.691693  263814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:03:52.695770  263814 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:03:52.695830  263814 kubeadm.go:401] StartCluster: {Name:no-preload-990631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:03:52.695918  263814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:03:52.695974  263814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:03:52.723702  263814 cri.go:89] found id: ""
	I1216 05:03:52.723760  263814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:03:52.732172  263814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:03:52.740692  263814 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:03:52.740742  263814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:03:52.749638  263814 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:03:52.749664  263814 kubeadm.go:158] found existing configuration files:
	
	I1216 05:03:52.749714  263814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:03:52.759122  263814 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:03:52.759196  263814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:03:52.769885  263814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:03:52.779873  263814 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:03:52.779932  263814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:03:52.788912  263814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:03:52.799151  263814 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:03:52.799240  263814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:03:52.807622  263814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:03:52.815868  263814 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:03:52.815930  263814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
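For each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf the tool greps for the expected control-plane endpoint and removes the file on a miss; the exit-status-2 results above just mean the files do not exist yet on a first start. A compact sketch of that cleanup loop (function name illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

// pruneStaleKubeconfigs removes any kubeconfig that does not point at
// the expected endpoint, matching the grep / rm -f pairs in the log.
// A missing file is treated like the grep misses above: nothing to
// keep, nothing to delete.
func pruneStaleKubeconfigs(dir, endpoint string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil {
			continue // not present yet, e.g. on first start
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("removing stale %s\n", path)
			os.Remove(path)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
}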
	I1216 05:03:52.823851  263814 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:03:52.859944  263814 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:03:52.860216  263814 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:03:52.923970  263814 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:03:52.924066  263814 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:03:52.924117  263814 kubeadm.go:319] OS: Linux
	I1216 05:03:52.924186  263814 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:03:52.924255  263814 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:03:52.924367  263814 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:03:52.924456  263814 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:03:52.924527  263814 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:03:52.924603  263814 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:03:52.924683  263814 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:03:52.924758  263814 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:03:52.988010  263814 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:03:52.988173  263814 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:03:52.988329  263814 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:03:53.003286  263814 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:03:53.009981  263814 out.go:252]   - Generating certificates and keys ...
	I1216 05:03:53.010089  263814 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:03:53.010185  263814 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:03:53.250541  263814 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:03:53.300650  263814 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 05:03:53.421564  263814 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 05:03:53.530796  263814 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 05:03:53.647078  263814 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 05:03:53.647289  263814 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-990631] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 05:03:53.696094  263814 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 05:03:53.696258  263814 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-990631] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 05:03:53.782535  263814 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 05:03:53.822272  263814 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 05:03:53.841427  263814 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 05:03:53.841537  263814 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:03:54.070443  263814 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:03:54.149383  263814 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:03:54.267350  263814 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:03:54.362469  263814 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:03:54.471703  263814 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:03:54.472310  263814 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:03:54.520261  263814 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
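The Start line at 05:03:52.823851 hands control to kubeadm with a versioned PATH prefix and a long --ignore-preflight-errors list (Swap, NumCPU, Mem, SystemVerification and friends are expected to trip inside a docker-driver container); the [certs], [kubeconfig], [etcd], and [control-plane] phases above are kubeadm's own output streamed back. A sketch of that invocation, built the same way the log shows, via sudo /bin/bash -c:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runKubeadmInit mirrors the Start line above: prefix PATH with the
// versioned binaries directory and let kubeadm drive the phases itself.
func runKubeadmInit(version, config string, ignored []string) error {
	binDir := "/var/lib/minikube/binaries/" + version
	script := fmt.Sprintf(
		`env PATH="%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
		binDir, config, strings.Join(ignored, ","))
	cmd := exec.Command("sudo", "/bin/bash", "-c", script)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := runKubeadmInit("v1.35.0-beta.0", "/var/tmp/minikube/kubeadm.yaml",
		[]string{"Swap", "NumCPU", "Mem", "SystemVerification"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}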
	I1216 05:03:49.951278  268657 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:03:49.951475  268657 start.go:159] libmachine.API.Create for "embed-certs-127599" (driver="docker")
	I1216 05:03:49.951507  268657 client.go:173] LocalClient.Create starting
	I1216 05:03:49.951583  268657 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:03:49.951641  268657 main.go:143] libmachine: Decoding PEM data...
	I1216 05:03:49.951670  268657 main.go:143] libmachine: Parsing certificate...
	I1216 05:03:49.951740  268657 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:03:49.951771  268657 main.go:143] libmachine: Decoding PEM data...
	I1216 05:03:49.951812  268657 main.go:143] libmachine: Parsing certificate...
	I1216 05:03:49.952213  268657 cli_runner.go:164] Run: docker network inspect embed-certs-127599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:03:49.970083  268657 cli_runner.go:211] docker network inspect embed-certs-127599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:03:49.970168  268657 network_create.go:284] running [docker network inspect embed-certs-127599] to gather additional debugging logs...
	I1216 05:03:49.970198  268657 cli_runner.go:164] Run: docker network inspect embed-certs-127599
	W1216 05:03:49.988293  268657 cli_runner.go:211] docker network inspect embed-certs-127599 returned with exit code 1
	I1216 05:03:49.988332  268657 network_create.go:287] error running [docker network inspect embed-certs-127599]: docker network inspect embed-certs-127599: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-127599 not found
	I1216 05:03:49.988350  268657 network_create.go:289] output of [docker network inspect embed-certs-127599]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-127599 not found
	
	** /stderr **
	I1216 05:03:49.988452  268657 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:03:50.007681  268657 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:03:50.008532  268657 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:03:50.009400  268657 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:03:50.010972  268657 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f90356ecb2b1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:f4:5a:57:87:23} reservation:<nil>}
	I1216 05:03:50.011985  268657 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-333da0694841 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e2:1b:19:25:b4:78} reservation:<nil>}
	I1216 05:03:50.012883  268657 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011c2b0}
	I1216 05:03:50.012914  268657 network_create.go:124] attempt to create docker network embed-certs-127599 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 05:03:50.012969  268657 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-127599 embed-certs-127599
	I1216 05:03:50.063970  268657 network_create.go:108] docker network embed-certs-127599 192.168.94.0/24 created
	I1216 05:03:50.064005  268657 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-127599" container
	I1216 05:03:50.064097  268657 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:03:50.081358  268657 cli_runner.go:164] Run: docker volume create embed-certs-127599 --label name.minikube.sigs.k8s.io=embed-certs-127599 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:03:50.100943  268657 oci.go:103] Successfully created a docker volume embed-certs-127599
	I1216 05:03:50.101095  268657 cli_runner.go:164] Run: docker run --rm --name embed-certs-127599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-127599 --entrypoint /usr/bin/test -v embed-certs-127599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:03:51.146497  268657 cli_runner.go:217] Completed: docker run --rm --name embed-certs-127599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-127599 --entrypoint /usr/bin/test -v embed-certs-127599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.045332495s)
	I1216 05:03:51.146533  268657 oci.go:107] Successfully prepared a docker volume embed-certs-127599
	I1216 05:03:51.146569  268657 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:03:51.146584  268657 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:03:51.146656  268657 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-127599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
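Before creating the embed-certs-127599 network, the run above walks candidate 192.168.x.0/24 blocks (49, 58, 67, 76, 85, in steps of 9), skips each one whose bridge already exists, and settles on 192.168.94.0/24 for `docker network create`. A simplified stand-in for that scan, checking local interface addresses instead of docker's own bookkeeping (an approximation, not minikube's actual "taken" test):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks the same candidate /24s the log shows and
// returns the first whose gateway address is not bound to any local
// interface, a rough proxy for "no bridge uses this subnet yet".
func firstFreeSubnet() (string, error) {
	taken := map[string]bool{}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok {
			taken[ipn.IP.String()] = true
		}
	}
	for third := 49; third <= 247; third += 9 {
		if gateway := fmt.Sprintf("192.168.%d.1", third); !taken[gateway] {
			return fmt.Sprintf("192.168.%d.0/24", third), nil
		}
	}
	return "", fmt.Errorf("no free private /24 found")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		panic(err)
	}
	// The chosen subnet then feeds straight into:
	//   docker network create --driver=bridge --subnet=<subnet> ...
	fmt.Println("using free private subnet:", subnet)
}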
	
	
	==> CRI-O <==
	Dec 16 05:03:41 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:41.979395627Z" level=info msg="Starting container: 303639a0915f5c5cbffc686a9601ba6bf54ddefdf5ddce4cabbf1fb0d3da3ad2" id=63b4a8c5-0334-49a7-9a2a-a2866800c30e name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:03:41 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:41.981438343Z" level=info msg="Started container" PID=2190 containerID=303639a0915f5c5cbffc686a9601ba6bf54ddefdf5ddce4cabbf1fb0d3da3ad2 description=kube-system/coredns-5dd5756b68-fplzg/coredns id=63b4a8c5-0334-49a7-9a2a-a2866800c30e name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1162f9e52ce77ff5a6d72bf652d304ed85d60a8acc0af65e9328593522e1eb4
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.097739588Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bfb5f2a5-b841-4723-9eac-5192fc93c2cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.097827771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.103987124Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bbbd2cb5dcc9a76a0b371739036ce269d68d38b28f36264f4ea6bd44e66bb3aa UID:be3c107e-accd-498b-b511-153da24708a6 NetNS:/var/run/netns/a2e0ce2b-d3ff-46a0-85e0-9bc7ecc8f71d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000286f60}] Aliases:map[]}"
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.104036851Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.115426533Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bbbd2cb5dcc9a76a0b371739036ce269d68d38b28f36264f4ea6bd44e66bb3aa UID:be3c107e-accd-498b-b511-153da24708a6 NetNS:/var/run/netns/a2e0ce2b-d3ff-46a0-85e0-9bc7ecc8f71d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000286f60}] Aliases:map[]}"
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.115592163Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.116581452Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.117808935Z" level=info msg="Ran pod sandbox bbbd2cb5dcc9a76a0b371739036ce269d68d38b28f36264f4ea6bd44e66bb3aa with infra container: default/busybox/POD" id=bfb5f2a5-b841-4723-9eac-5192fc93c2cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.119049162Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b67440c8-e855-4504-b2ff-803f68dcc6c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.119164842Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b67440c8-e855-4504-b2ff-803f68dcc6c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.119209453Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b67440c8-e855-4504-b2ff-803f68dcc6c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.119808479Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a36c7df1-2363-4992-a0db-55aa9f255902 name=/runtime.v1.ImageService/PullImage
	Dec 16 05:03:45 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:45.121571874Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.456354427Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a36c7df1-2363-4992-a0db-55aa9f255902 name=/runtime.v1.ImageService/PullImage
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.457331512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9cfb7aa9-643f-44d2-bae5-aa26babdb40f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.458957406Z" level=info msg="Creating container: default/busybox/busybox" id=1f5ff757-a630-4807-a73d-5e75b4a400d1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.459102199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.463517643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.464120199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.490490411Z" level=info msg="Created container 8b622d86efafefe80572d72ef717dbecef9f3b032df617622a2dac86efd5c66d: default/busybox/busybox" id=1f5ff757-a630-4807-a73d-5e75b4a400d1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.491172366Z" level=info msg="Starting container: 8b622d86efafefe80572d72ef717dbecef9f3b032df617622a2dac86efd5c66d" id=ec24fcf4-ecb9-4df7-bf58-845b3e1f9aff name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:03:46 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:46.49280766Z" level=info msg="Started container" PID=2253 containerID=8b622d86efafefe80572d72ef717dbecef9f3b032df617622a2dac86efd5c66d description=default/busybox/busybox id=ec24fcf4-ecb9-4df7-bf58-845b3e1f9aff name=/runtime.v1.RuntimeService/StartContainer sandboxID=bbbd2cb5dcc9a76a0b371739036ce269d68d38b28f36264f4ea6bd44e66bb3aa
	Dec 16 05:03:53 old-k8s-version-545075 crio[782]: time="2025-12-16T05:03:53.877787199Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
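The busybox entries above trace the standard CRI pull path: ImageStatus reports the image absent, PullImage resolves the tag to a digest, then CreateContainer/StartContainer run against the pulled digest. The same check-then-pull sequence can be reproduced from the node with crictl, whose inspecti subcommand fronts ImageStatus and exits non-zero when the image is missing; a small exec-based sketch (helper name illustrative, sudo assumed):

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage reproduces the check-then-pull flow from the CRI-O log:
// inspecti stands in for ImageService/ImageStatus, and pull maps to
// ImageService/PullImage, which resolves the tag to a digest.
func ensureImage(ref string) error {
	if err := exec.Command("sudo", "crictl", "inspecti", ref).Run(); err == nil {
		return nil // already present, nothing to pull
	}
	out, err := exec.Command("sudo", "crictl", "pull", ref).CombinedOutput()
	if err != nil {
		return fmt.Errorf("pull %s: %v\n%s", ref, err, out)
	}
	fmt.Printf("pulled %s:\n%s", ref, out)
	return nil
}

func main() {
	if err := ensureImage("gcr.io/k8s-minikube/busybox:1.28.4-glibc"); err != nil {
		panic(err)
	}
}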
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	8b622d86efafe       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   bbbd2cb5dcc9a       busybox                                          default
	303639a0915f5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   a1162f9e52ce7       coredns-5dd5756b68-fplzg                         kube-system
	b1b7e31db9189       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   7b1b93fc5e89e       storage-provisioner                              kube-system
	54f37369d6c3b       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   1761eb867d2a5       kindnet-r74zw                                    kube-system
	672b260754ae7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   9854ed7cd7d1c       kube-proxy-7pldc                                 kube-system
	cd520ca1d3cb1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   f55ac31cfa0e4       etcd-old-k8s-version-545075                      kube-system
	28bb23055d9c5       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   17c5280b2f9d7       kube-apiserver-old-k8s-version-545075            kube-system
	c786ac1e415f9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   2369d723dee71       kube-controller-manager-old-k8s-version-545075   kube-system
	4cde93e72af46       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   0671e20ca1e44       kube-scheduler-old-k8s-version-545075            kube-system
	
	
	==> coredns [303639a0915f5c5cbffc686a9601ba6bf54ddefdf5ddce4cabbf1fb0d3da3ad2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36979 - 22992 "HINFO IN 6590198753447444571.7242737249290213217. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014545731s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-545075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-545075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=old-k8s-version-545075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_03_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:03:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-545075
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:03:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:03:46 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:03:46 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:03:46 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:03:46 +0000   Tue, 16 Dec 2025 05:03:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-545075
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                37fbaf4b-0aa1-43d5-a5d3-708d61d6a5be
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-fplzg                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-545075                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-r74zw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-545075             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-545075    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-7pldc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-545075             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-545075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-545075 event: Registered Node old-k8s-version-545075 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-545075 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [cd520ca1d3cb186c8037c0223cf7d048c5ac84fb39a630ad10c31fd2e90d0522] <==
	{"level":"info","ts":"2025-12-16T05:03:10.991802Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-16T05:03:10.991929Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-16T05:03:10.992578Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-16T05:03:10.992105Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-16T05:03:10.992146Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-16T05:03:11.980667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-16T05:03:11.980725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-16T05:03:11.98074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-16T05:03:11.980751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-16T05:03:11.980757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-16T05:03:11.980764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-16T05:03:11.980771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-16T05:03:11.981634Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T05:03:11.98176Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-545075 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-16T05:03:11.981789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T05:03:11.981769Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T05:03:11.982309Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T05:03:11.982396Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T05:03:11.98242Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T05:03:11.982504Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-16T05:03:11.982529Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-16T05:03:11.98367Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-16T05:03:11.984351Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2025-12-16T05:03:54.097582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.840396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1122"}
	{"level":"info","ts":"2025-12-16T05:03:54.097685Z","caller":"traceutil/trace.go:171","msg":"trace[857548749] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:474; }","duration":"104.961779ms","start":"2025-12-16T05:03:53.992702Z","end":"2025-12-16T05:03:54.097664Z","steps":["trace[857548749] 'range keys from in-memory index tree'  (duration: 104.678082ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:03:56 up 46 min,  0 user,  load average: 2.46, 2.24, 1.70
	Linux old-k8s-version-545075 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54f37369d6c3b59779cb2802f19908f2aef656b300aebf98f66c59683e266a36] <==
	I1216 05:03:30.715982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:03:30.809786       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 05:03:30.810092       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:03:30.810117       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:03:30.810143       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:03:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:03:31.014165       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:03:31.014981       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:03:31.015013       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:03:31.015234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:03:31.415654       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:03:31.415680       1 metrics.go:72] Registering metrics
	I1216 05:03:31.415735       1 controller.go:711] "Syncing nftables rules"
	I1216 05:03:41.018144       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:03:41.018199       1 main.go:301] handling current node
	I1216 05:03:51.017069       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:03:51.017112       1 main.go:301] handling current node
	
	
	==> kube-apiserver [28bb23055d9c5f8b9ca3c400927fa5edbda229086f03fe5c2e84cee945f461c2] <==
	I1216 05:03:12.997584       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1216 05:03:12.997597       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1216 05:03:12.998148       1 controller.go:624] quota admission added evaluator for: namespaces
	I1216 05:03:13.007383       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1216 05:03:13.007912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1216 05:03:13.007955       1 aggregator.go:166] initial CRD sync complete...
	I1216 05:03:13.007963       1 autoregister_controller.go:141] Starting autoregister controller
	I1216 05:03:13.007970       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:03:13.007977       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:03:13.201178       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:03:13.902441       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 05:03:13.906857       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 05:03:13.906900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:03:14.322352       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:03:14.358758       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:03:14.408288       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 05:03:14.414185       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1216 05:03:14.415491       1 controller.go:624] quota admission added evaluator for: endpoints
	I1216 05:03:14.420870       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:03:14.939188       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1216 05:03:15.842414       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1216 05:03:15.854490       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 05:03:15.863451       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1216 05:03:28.446304       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1216 05:03:28.697804       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c786ac1e415f9021ee8c53ab9da96abd4264d5be9aaf2c5aa367fa30484a8e62] <==
	I1216 05:03:28.322464       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 05:03:28.345706       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 05:03:28.345740       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1216 05:03:28.454863       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7pldc"
	I1216 05:03:28.456006       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r74zw"
	I1216 05:03:28.701682       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1216 05:03:28.814305       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lc8rj"
	I1216 05:03:28.823010       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fplzg"
	I1216 05:03:28.830276       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.600457ms"
	I1216 05:03:28.841071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.731462ms"
	I1216 05:03:28.841223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.173µs"
	I1216 05:03:28.842514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.687µs"
	I1216 05:03:29.444273       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1216 05:03:29.469889       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-lc8rj"
	I1216 05:03:29.487264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.747725ms"
	I1216 05:03:29.499275       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.874455ms"
	I1216 05:03:29.499399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.401µs"
	I1216 05:03:41.611929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="126.683µs"
	I1216 05:03:41.636804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.179µs"
	I1216 05:03:42.012645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.712µs"
	I1216 05:03:42.964059       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1216 05:03:42.964249       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1216 05:03:42.964271       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fplzg" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fplzg"
	I1216 05:03:43.021316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.530202ms"
	I1216 05:03:43.021417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.993µs"
	
	
	==> kube-proxy [672b260754ae7714d947b6414bf81384c954779dbe297b93fa6c92138752c0e5] <==
	I1216 05:03:28.892349       1 server_others.go:69] "Using iptables proxy"
	I1216 05:03:28.906158       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1216 05:03:28.936167       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:03:28.939887       1 server_others.go:152] "Using iptables Proxier"
	I1216 05:03:28.939998       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1216 05:03:28.940053       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1216 05:03:28.940109       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1216 05:03:28.940716       1 server.go:846] "Version info" version="v1.28.0"
	I1216 05:03:28.940881       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:03:28.941960       1 config.go:188] "Starting service config controller"
	I1216 05:03:28.956443       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1216 05:03:28.944168       1 config.go:97] "Starting endpoint slice config controller"
	I1216 05:03:28.956651       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1216 05:03:28.944963       1 config.go:315] "Starting node config controller"
	I1216 05:03:28.956708       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1216 05:03:29.056637       1 shared_informer.go:318] Caches are synced for service config
	I1216 05:03:29.056821       1 shared_informer.go:318] Caches are synced for node config
	I1216 05:03:29.056852       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4cde93e72af46e02fb2dfb6976fcf52bec3519410229a213e2c3fb02741c44a1] <==
	E1216 05:03:12.952794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 05:03:12.952796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1216 05:03:12.952882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 05:03:12.952935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1216 05:03:12.952949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 05:03:12.952980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1216 05:03:12.952986       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 05:03:12.953003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1216 05:03:12.953202       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 05:03:12.953223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1216 05:03:13.815468       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 05:03:13.815498       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:03:13.868588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 05:03:13.868616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1216 05:03:13.892558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 05:03:13.892596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1216 05:03:13.976006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 05:03:13.976098       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1216 05:03:14.011320       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 05:03:14.011355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1216 05:03:14.172298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 05:03:14.172340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1216 05:03:14.183507       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 05:03:14.183624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1216 05:03:16.348906       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 05:03:27 old-k8s-version-545075 kubelet[1419]: I1216 05:03:27.850422    1419 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 05:03:27 old-k8s-version-545075 kubelet[1419]: I1216 05:03:27.853370    1419 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.460991    1419 topology_manager.go:215] "Topology Admit Handler" podUID="0fdb902a-9463-46da-835f-598868f4d79b" podNamespace="kube-system" podName="kube-proxy-7pldc"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.462014    1419 topology_manager.go:215] "Topology Admit Handler" podUID="7905ab47-f1e5-4003-bc10-05de82340a76" podNamespace="kube-system" podName="kindnet-r74zw"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.490883    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tnxq\" (UniqueName: \"kubernetes.io/projected/0fdb902a-9463-46da-835f-598868f4d79b-kube-api-access-6tnxq\") pod \"kube-proxy-7pldc\" (UID: \"0fdb902a-9463-46da-835f-598868f4d79b\") " pod="kube-system/kube-proxy-7pldc"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.490937    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7905ab47-f1e5-4003-bc10-05de82340a76-cni-cfg\") pod \"kindnet-r74zw\" (UID: \"7905ab47-f1e5-4003-bc10-05de82340a76\") " pod="kube-system/kindnet-r74zw"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.490980    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fdb902a-9463-46da-835f-598868f4d79b-kube-proxy\") pod \"kube-proxy-7pldc\" (UID: \"0fdb902a-9463-46da-835f-598868f4d79b\") " pod="kube-system/kube-proxy-7pldc"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.491008    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7905ab47-f1e5-4003-bc10-05de82340a76-lib-modules\") pod \"kindnet-r74zw\" (UID: \"7905ab47-f1e5-4003-bc10-05de82340a76\") " pod="kube-system/kindnet-r74zw"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.491155    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fdb902a-9463-46da-835f-598868f4d79b-xtables-lock\") pod \"kube-proxy-7pldc\" (UID: \"0fdb902a-9463-46da-835f-598868f4d79b\") " pod="kube-system/kube-proxy-7pldc"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.491199    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fdb902a-9463-46da-835f-598868f4d79b-lib-modules\") pod \"kube-proxy-7pldc\" (UID: \"0fdb902a-9463-46da-835f-598868f4d79b\") " pod="kube-system/kube-proxy-7pldc"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.491234    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7905ab47-f1e5-4003-bc10-05de82340a76-xtables-lock\") pod \"kindnet-r74zw\" (UID: \"7905ab47-f1e5-4003-bc10-05de82340a76\") " pod="kube-system/kindnet-r74zw"
	Dec 16 05:03:28 old-k8s-version-545075 kubelet[1419]: I1216 05:03:28.491267    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7svwd\" (UniqueName: \"kubernetes.io/projected/7905ab47-f1e5-4003-bc10-05de82340a76-kube-api-access-7svwd\") pod \"kindnet-r74zw\" (UID: \"7905ab47-f1e5-4003-bc10-05de82340a76\") " pod="kube-system/kindnet-r74zw"
	Dec 16 05:03:30 old-k8s-version-545075 kubelet[1419]: I1216 05:03:30.008067    1419 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7pldc" podStartSLOduration=2.008004566 podCreationTimestamp="2025-12-16 05:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:03:28.977121369 +0000 UTC m=+13.158488676" watchObservedRunningTime="2025-12-16 05:03:30.008004566 +0000 UTC m=+14.189371874"
	Dec 16 05:03:30 old-k8s-version-545075 kubelet[1419]: I1216 05:03:30.980987    1419 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-r74zw" podStartSLOduration=1.264621362 podCreationTimestamp="2025-12-16 05:03:28 +0000 UTC" firstStartedPulling="2025-12-16 05:03:28.772344822 +0000 UTC m=+12.953712121" lastFinishedPulling="2025-12-16 05:03:30.488650397 +0000 UTC m=+14.670017700" observedRunningTime="2025-12-16 05:03:30.980494594 +0000 UTC m=+15.161861902" watchObservedRunningTime="2025-12-16 05:03:30.980926941 +0000 UTC m=+15.162294249"
	Dec 16 05:03:41 old-k8s-version-545075 kubelet[1419]: I1216 05:03:41.372949    1419 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 16 05:03:41 old-k8s-version-545075 kubelet[1419]: I1216 05:03:41.563338    1419 topology_manager.go:215] "Topology Admit Handler" podUID="6ee6ea7e-845f-4594-8bbe-a0c45e39e694" podNamespace="kube-system" podName="storage-provisioner"
	Dec 16 05:03:41 old-k8s-version-545075 kubelet[1419]: I1216 05:03:41.585518    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl994\" (UniqueName: \"kubernetes.io/projected/6ee6ea7e-845f-4594-8bbe-a0c45e39e694-kube-api-access-dl994\") pod \"storage-provisioner\" (UID: \"6ee6ea7e-845f-4594-8bbe-a0c45e39e694\") " pod="kube-system/storage-provisioner"
	Dec 16 05:03:41 old-k8s-version-545075 kubelet[1419]: I1216 05:03:41.585575    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6ee6ea7e-845f-4594-8bbe-a0c45e39e694-tmp\") pod \"storage-provisioner\" (UID: \"6ee6ea7e-845f-4594-8bbe-a0c45e39e694\") " pod="kube-system/storage-provisioner"
	Dec 16 05:03:41 old-k8s-version-545075 kubelet[1419]: I1216 05:03:41.612449    1419 topology_manager.go:215] "Topology Admit Handler" podUID="c0e4aa41-d754-405c-83b6-53a5dd2fc511" podNamespace="kube-system" podName="coredns-5dd5756b68-fplzg"
	Dec 16 05:03:41 old-k8s-version-545075 kubelet[1419]: I1216 05:03:41.686301    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e4aa41-d754-405c-83b6-53a5dd2fc511-config-volume\") pod \"coredns-5dd5756b68-fplzg\" (UID: \"c0e4aa41-d754-405c-83b6-53a5dd2fc511\") " pod="kube-system/coredns-5dd5756b68-fplzg"
	Dec 16 05:03:41 old-k8s-version-545075 kubelet[1419]: I1216 05:03:41.686541    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthjm\" (UniqueName: \"kubernetes.io/projected/c0e4aa41-d754-405c-83b6-53a5dd2fc511-kube-api-access-sthjm\") pod \"coredns-5dd5756b68-fplzg\" (UID: \"c0e4aa41-d754-405c-83b6-53a5dd2fc511\") " pod="kube-system/coredns-5dd5756b68-fplzg"
	Dec 16 05:03:42 old-k8s-version-545075 kubelet[1419]: I1216 05:03:42.027839    1419 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fplzg" podStartSLOduration=14.027778312 podCreationTimestamp="2025-12-16 05:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:03:42.012644259 +0000 UTC m=+26.194011570" watchObservedRunningTime="2025-12-16 05:03:42.027778312 +0000 UTC m=+26.209145621"
	Dec 16 05:03:42 old-k8s-version-545075 kubelet[1419]: I1216 05:03:42.028383    1419 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.028339145 podCreationTimestamp="2025-12-16 05:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:03:42.028234342 +0000 UTC m=+26.209601645" watchObservedRunningTime="2025-12-16 05:03:42.028339145 +0000 UTC m=+26.209706463"
	Dec 16 05:03:44 old-k8s-version-545075 kubelet[1419]: I1216 05:03:44.795850    1419 topology_manager.go:215] "Topology Admit Handler" podUID="be3c107e-accd-498b-b511-153da24708a6" podNamespace="default" podName="busybox"
	Dec 16 05:03:44 old-k8s-version-545075 kubelet[1419]: I1216 05:03:44.907244    1419 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-265f8\" (UniqueName: \"kubernetes.io/projected/be3c107e-accd-498b-b511-153da24708a6-kube-api-access-265f8\") pod \"busybox\" (UID: \"be3c107e-accd-498b-b511-153da24708a6\") " pod="default/busybox"
	
	
	==> storage-provisioner [b1b7e31db91890eeddc672ba3593196149df62041c0f099db7ec85a3bb867504] <==
	I1216 05:03:41.921043       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:03:41.931875       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:03:41.932004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 05:03:41.945964       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:03:41.946414       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c47abbd-d74f-4d30-b214-42c151476c8d", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-545075_c96083e3-a80c-42c3-998c-e5ab31ba0500 became leader
	I1216 05:03:41.946556       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-545075_c96083e3-a80c-42c3-998c-e5ab31ba0500!
	I1216 05:03:42.047866       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-545075_c96083e3-a80c-42c3-998c-e5ab31ba0500!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-545075 -n old-k8s-version-545075
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-545075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.06s)
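Note: the jsonpath query above (helpers_test.go:270) is how the harness surfaces any pod that is not in the Running phase. For readers following along, the same check can be sketched with client-go; this is an illustrative reconstruction, not the harness's own code, and it assumes a kubeconfig at the default path.

	// Sketch: list pods in all namespaces whose phase is not Running,
	// mirroring `kubectl get po -A --field-selector=status.phase!=Running`.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running", // same selector the harness passes to kubectl
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}

The field selector is evaluated server-side, so both forms ask the API server the same question.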

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.459949ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:04:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
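This exit status 11 is the same MK_ADDON_ENABLE_PAUSED failure mode seen throughout this report: before enabling an addon, minikube checks whether the cluster is paused by listing containers with `sudo runc list -f json` inside the node, and on this crio setup the probe itself fails because /run/runc is missing. A rough sketch of that probe follows; it runs the command locally for illustration, whereas minikube executes it inside the node, and the struct covers only the subset of runc's JSON output needed here.

	// Sketch of the paused probe: run `runc list -f json` and look for
	// containers in the "paused" state. Illustration only, not minikube code.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The branch hit above: runc exits 1 when its root directory
			// (/run/runc here) does not exist.
			fmt.Println("list paused failed:", err)
			return
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("parse:", err)
			return
		}
		for _, c := range containers {
			if c.Status == "paused" {
				fmt.Println("paused container:", c.ID)
			}
		}
	}

Because the probe errors out rather than returning an empty list, minikube aborts the enable instead of treating the cluster as unpaused.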
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-990631 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-990631 describe deploy/metrics-server -n kube-system: exit status 1 (75.544917ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-990631 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
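For reference, the expected string shows how the two flags compose: the --registries=MetricsServer=fake.domain value is prefixed, slash-separated, onto the --images=MetricsServer=registry.k8s.io/echoserver:1.4 value. A minimal sketch of that composition; overrideImage is a hypothetical helper name for illustration, not minikube's actual function.

	// Sketch: compose an addon image reference from the --registries and
	// --images overrides. overrideImage is hypothetical, not minikube code.
	package main

	import "fmt"

	func overrideImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		// Prints "fake.domain/registry.k8s.io/echoserver:1.4", the substring
		// the test expects to find in the metrics-server deployment.
		fmt.Println(overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	}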
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-990631
helpers_test.go:244: (dbg) docker inspect no-preload-990631:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1",
	        "Created": "2025-12-16T05:03:37.47946154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 264259,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:03:37.518039522Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/hosts",
	        "LogPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1-json.log",
	        "Name": "/no-preload-990631",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-990631:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-990631",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1",
	                "LowerDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-990631",
	                "Source": "/var/lib/docker/volumes/no-preload-990631/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-990631",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-990631",
	                "name.minikube.sigs.k8s.io": "no-preload-990631",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "59ce3e5f0fc7c14459c9401c4c76fbbc2a6b1150a8726c9775fbe4417985ccc4",
	            "SandboxKey": "/var/run/docker/netns/59ce3e5f0fc7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-990631": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a201df1b3d2f7b8b13672d056e69c36960ed0846b69e35e8077f1fc3c7bcec36",
	                    "EndpointID": "68686c751314bfed7bf73dc0a441a440646450e89e29962c438ee732b5cad549",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:e0:2f:25:5f:a6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-990631",
	                        "ac137694b742"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
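As an aside, the NetworkSettings.Ports block above is where the API server's host mapping lives (8443/tcp mapped to 127.0.0.1:33069 here). A small sketch of pulling that mapping out of `docker inspect` JSON, with the struct trimmed to just the fields shown above:

	// Sketch: read the host binding for 8443/tcp from `docker inspect`,
	// matching the NetworkSettings.Ports block in the output above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-990631").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
			panic(fmt.Sprintf("unexpected inspect output: %v", err))
		}
		for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}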
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-990631 -n no-preload-990631
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-990631 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-990631 logs -n 25: (1.038474307s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-548714 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo containerd config dump                                                                                                                                                                                                  │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo crio config                                                                                                                                                                                                             │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ delete  │ -p cilium-548714                                                                                                                                                                                                                              │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │ 16 Dec 25 05:02 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p stopped-upgrade-572334                                                                                                                                                                                                                     │ stopped-upgrade-572334       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p cert-expiration-123545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p cert-expiration-123545                                                                                                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │                     │
	│ stop    │ -p old-k8s-version-545075 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-545075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-132399                                                                                                                                                                                                                  │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ delete  │ -p disable-driver-mounts-148966                                                                                                                                                                                                               │ disable-driver-mounts-148966 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:04:17
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:04:17.239730  276917 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:04:17.239836  276917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:17.239845  276917 out.go:374] Setting ErrFile to fd 2...
	I1216 05:04:17.239849  276917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:17.240110  276917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:04:17.240562  276917 out.go:368] Setting JSON to false
	I1216 05:04:17.241674  276917 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2808,"bootTime":1765858649,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:04:17.241730  276917 start.go:143] virtualization: kvm guest
	I1216 05:04:17.243623  276917 out.go:179] * [default-k8s-diff-port-375252] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:04:17.244714  276917 notify.go:221] Checking for updates...
	I1216 05:04:17.244726  276917 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:04:17.245901  276917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:04:17.247053  276917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:17.248192  276917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:04:17.249225  276917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:04:17.250228  276917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:04:17.251890  276917 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:04:17.252071  276917 config.go:182] Loaded profile config "no-preload-990631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:04:17.252243  276917 config.go:182] Loaded profile config "old-k8s-version-545075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:04:17.252357  276917 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:04:17.276781  276917 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:04:17.276886  276917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:17.336528  276917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:04:17.325910031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:17.336636  276917 docker.go:319] overlay module found
	I1216 05:04:17.339812  276917 out.go:179] * Using the docker driver based on user configuration
	I1216 05:04:17.341312  276917 start.go:309] selected driver: docker
	I1216 05:04:17.341327  276917 start.go:927] validating driver "docker" against <nil>
	I1216 05:04:17.341338  276917 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:04:17.341908  276917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:17.403292  276917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:04:17.394090177 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
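The two `docker system info --format "{{json .}}"` dumps above are what minikube's info.go parses before and after driver validation. A minimal sketch of that step, assuming the Docker CLI is on PATH and decoding only a hand-picked subset of the fields this report actually shows (NCPU, MemTotal, CgroupDriver, OperatingSystem), not Docker's full Info schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo is an illustrative subset of the JSON printed above.
    type dockerInfo struct {
        NCPU            int
        MemTotal        int64
        CgroupDriver    string
        OperatingSystem string
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        // The log above shows NCPU:8 MemTotal:33652080640 CgroupDriver:systemd.
        fmt.Printf("cpus=%d mem=%d cgroup=%s os=%q\n",
            info.NCPU, info.MemTotal, info.CgroupDriver, info.OperatingSystem)
    }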
	I1216 05:04:17.403473  276917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:04:17.403670  276917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:17.405247  276917 out.go:179] * Using Docker driver with root privileges
	I1216 05:04:17.406404  276917 cni.go:84] Creating CNI manager for ""
	I1216 05:04:17.406467  276917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:17.406478  276917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:04:17.406548  276917 start.go:353] cluster config:
	{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:17.407726  276917 out.go:179] * Starting "default-k8s-diff-port-375252" primary control-plane node in "default-k8s-diff-port-375252" cluster
	I1216 05:04:17.408731  276917 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:04:17.409733  276917 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:04:17.410754  276917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:17.410801  276917 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:04:17.410823  276917 cache.go:65] Caching tarball of preloaded images
	I1216 05:04:17.410842  276917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:04:17.410934  276917 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:04:17.410950  276917 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:04:17.411090  276917 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:04:17.411123  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json: {Name:mk05b7e9410b298d89eb25ce0d100aa97d39be7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:17.433925  276917 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:04:17.433950  276917 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:04:17.433975  276917 cache.go:243] Successfully downloaded all kic artifacts
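image.go:81/100 above checks whether the kicbase image is already in the local daemon before deciding to pull. A rough, hedged equivalent: minikube uses a daemon client library rather than the CLI, but `docker image inspect`'s exit code expresses the same test:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon returns true iff the local daemon already has the image:
    // `docker image inspect` exits non-zero for unknown references.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"
        if imageInDaemon(ref) {
            fmt.Println("found in local docker daemon, skipping pull")
        } else {
            fmt.Println("not cached locally, would pull")
        }
    }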
	I1216 05:04:17.434012  276917 start.go:360] acquireMachinesLock for default-k8s-diff-port-375252: {Name:mkb8cd554afbe4d541db5e350d6207558dad5cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:04:17.434172  276917 start.go:364] duration metric: took 113.217µs to acquireMachinesLock for "default-k8s-diff-port-375252"
	I1216 05:04:17.434209  276917 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:04:17.434328  276917 start.go:125] createHost starting for "" (driver="docker")
	I1216 05:04:13.623160  275199 out.go:252] * Restarting existing docker container for "old-k8s-version-545075" ...
	I1216 05:04:13.623247  275199 cli_runner.go:164] Run: docker start old-k8s-version-545075
	I1216 05:04:13.908616  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:04:13.934940  275199 kic.go:430] container "old-k8s-version-545075" state is running.
	I1216 05:04:13.935723  275199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545075
	I1216 05:04:13.958271  275199 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/config.json ...
	I1216 05:04:13.958554  275199 machine.go:94] provisionDockerMachine start ...
	I1216 05:04:13.958647  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:13.977884  275199 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:13.978168  275199 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1216 05:04:13.978185  275199 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:04:13.978810  275199 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44444->127.0.0.1:33076: read: connection reset by peer
	I1216 05:04:17.109284  275199 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-545075
	
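The "connection reset by peer" at 05:04:13.978 above is expected: sshd inside the freshly restarted container is not listening yet, and libmachine simply retries the dial until the handshake at 05:04:17.109 succeeds. A minimal sketch of that retry loop, using plain TCP as a stand-in for the real SSH handshake:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps dialing until the connection is accepted or the
    // deadline passes, mirroring the dial/retry behavior in the log above.
    func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                return conn, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("giving up on %s: %w", addr, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        conn, err := dialWithRetry("127.0.0.1:33076", 30*time.Second)
        if err == nil {
            conn.Close()
        }
        fmt.Println(err)
    }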
	I1216 05:04:17.109315  275199 ubuntu.go:182] provisioning hostname "old-k8s-version-545075"
	I1216 05:04:17.109376  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:17.128599  275199 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:17.128914  275199 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1216 05:04:17.128938  275199 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-545075 && echo "old-k8s-version-545075" | sudo tee /etc/hostname
	I1216 05:04:17.266529  275199 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-545075
	
	I1216 05:04:17.266606  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:17.289002  275199 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:17.289275  275199 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1216 05:04:17.289301  275199 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-545075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-545075/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-545075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:04:17.423741  275199 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:04:17.423769  275199 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:04:17.423807  275199 ubuntu.go:190] setting up certificates
	I1216 05:04:17.423823  275199 provision.go:84] configureAuth start
	I1216 05:04:17.423899  275199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545075
	I1216 05:04:17.444658  275199 provision.go:143] copyHostCerts
	I1216 05:04:17.444747  275199 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:04:17.444765  275199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:04:17.444852  275199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:04:17.445003  275199 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:04:17.445029  275199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:04:17.445083  275199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:04:17.445189  275199 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:04:17.445199  275199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:04:17.445237  275199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:04:17.445343  275199 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-545075 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-545075]
	I1216 05:04:17.535353  275199 provision.go:177] copyRemoteCerts
	I1216 05:04:17.535430  275199 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:04:17.535484  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:17.554976  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:17.651925  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:04:17.671847  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:04:17.690206  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 05:04:17.709206  275199 provision.go:87] duration metric: took 285.363629ms to configureAuth
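provision.go:117 above generates a server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-545075]. A self-contained sketch with crypto/x509, generating a throwaway CA in-process instead of loading ca.pem/ca-key.pem from disk as the real code does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // CA side: minikube keeps ca.pem/ca-key.pem under .minikube/certs.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Server cert with the SAN list from the provision.go:117 line above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-545075"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-545075"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }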
	I1216 05:04:17.709230  275199 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:04:17.709416  275199 config.go:182] Loaded profile config "old-k8s-version-545075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:04:17.709535  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:17.728840  275199 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:17.729149  275199 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1216 05:04:17.729177  275199 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:04:18.048941  275199 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:04:18.048978  275199 machine.go:97] duration metric: took 4.090396662s to provisionDockerMachine
	I1216 05:04:18.048995  275199 start.go:293] postStartSetup for "old-k8s-version-545075" (driver="docker")
	I1216 05:04:18.049009  275199 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:04:18.049094  275199 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:04:18.049143  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:18.070716  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:18.174849  275199 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:04:18.178858  275199 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:04:18.178890  275199 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:04:18.178902  275199 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:04:18.178972  275199 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:04:18.179084  275199 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:04:18.179197  275199 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:04:18.187585  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:18.206234  275199 start.go:296] duration metric: took 157.224509ms for postStartSetup
	I1216 05:04:18.206319  275199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:04:18.206375  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:18.227959  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:18.319324  275199 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:04:18.325627  275199 fix.go:56] duration metric: took 4.72762935s for fixHost
	I1216 05:04:18.325660  275199 start.go:83] releasing machines lock for "old-k8s-version-545075", held for 4.727688258s
	I1216 05:04:18.325730  275199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545075
	I1216 05:04:18.343905  275199 ssh_runner.go:195] Run: cat /version.json
	I1216 05:04:18.343960  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:18.344051  275199 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:04:18.344124  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:18.366732  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:18.366903  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:16.069242  268657 addons.go:530] duration metric: took 485.243042ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:04:16.371204  268657 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-127599" context rescaled to 1 replicas
	W1216 05:04:17.872334  268657 node_ready.go:57] node "embed-certs-127599" has "Ready":"False" status (will retry)
	I1216 05:04:18.520711  275199 ssh_runner.go:195] Run: systemctl --version
	I1216 05:04:18.527414  275199 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:04:18.564193  275199 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:04:18.570338  275199 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:04:18.570402  275199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:04:18.579999  275199 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:04:18.580040  275199 start.go:496] detecting cgroup driver to use...
	I1216 05:04:18.580083  275199 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:04:18.580157  275199 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:04:18.599791  275199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:04:18.615000  275199 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:04:18.615077  275199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:04:18.630705  275199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:04:18.643660  275199 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:04:18.742403  275199 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:04:18.833821  275199 docker.go:234] disabling docker service ...
	I1216 05:04:18.833882  275199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:04:18.848486  275199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:04:18.862796  275199 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:04:18.971102  275199 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:04:19.076455  275199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:04:19.089401  275199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:04:19.104200  275199 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1216 05:04:19.104261  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.113550  275199 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:04:19.113612  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.123008  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.132370  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.141529  275199 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:04:19.149888  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.159325  275199 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.170473  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.180250  275199 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:04:19.188205  275199 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:04:19.195791  275199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:19.275382  275199 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:04:21.353547  275199 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.078110101s)
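The separate "Completed: ... (2.078110101s)" line above appears because the crio restart crossed ssh_runner's slow-command threshold. The pattern, sketched (the one-second threshold here is a guess, not minikube's actual value):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runTimed runs a command and, when it takes longer than a threshold,
    // logs a "Completed: ... (duration)" line like ssh_runner.go:235 above.
    func runTimed(name string, args ...string) error {
        start := time.Now()
        err := exec.Command(name, args...).Run()
        if d := time.Since(start); d > time.Second {
            fmt.Printf("Completed: %s %v: (%s)\n", name, args, d)
        }
        return err
    }

    func main() {
        _ = runTimed("sleep", "2")
    }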
	I1216 05:04:21.353579  275199 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:04:21.353630  275199 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:04:21.357833  275199 start.go:564] Will wait 60s for crictl version
	I1216 05:04:21.357898  275199 ssh_runner.go:195] Run: which crictl
	I1216 05:04:21.361802  275199 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:04:21.387774  275199 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:04:21.387856  275199 ssh_runner.go:195] Run: crio --version
	I1216 05:04:21.421094  275199 ssh_runner.go:195] Run: crio --version
	I1216 05:04:21.458278  275199 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	W1216 05:04:18.532992  263814 node_ready.go:57] node "no-preload-990631" has "Ready":"False" status (will retry)
	I1216 05:04:19.032914  263814 node_ready.go:49] node "no-preload-990631" is "Ready"
	I1216 05:04:19.032956  263814 node_ready.go:38] duration metric: took 12.503090752s for node "no-preload-990631" to be "Ready" ...
	I1216 05:04:19.032974  263814 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:04:19.033049  263814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:19.045687  263814 api_server.go:72] duration metric: took 12.961240582s to wait for apiserver process to appear ...
	I1216 05:04:19.045719  263814 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:04:19.045746  263814 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:04:19.052682  263814 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 05:04:19.053785  263814 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:04:19.053815  263814 api_server.go:131] duration metric: took 8.088348ms to wait for apiserver health ...
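The healthz wait above is a plain HTTPS GET that succeeds once the apiserver answers 200 with body "ok". A sketch; InsecureSkipVerify stands in for minikube's real client, which trusts the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz probes the apiserver like api_server.go:253 above.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://192.168.103.2:8443/healthz"))
    }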
	I1216 05:04:19.053826  263814 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:04:19.057753  263814 system_pods.go:59] 8 kube-system pods found
	I1216 05:04:19.057797  263814 system_pods.go:61] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:19.057805  263814 system_pods.go:61] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running
	I1216 05:04:19.057811  263814 system_pods.go:61] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:04:19.057817  263814 system_pods.go:61] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:19.057821  263814 system_pods.go:61] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running
	I1216 05:04:19.057825  263814 system_pods.go:61] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:04:19.057828  263814 system_pods.go:61] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running
	I1216 05:04:19.057833  263814 system_pods.go:61] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:19.057838  263814 system_pods.go:74] duration metric: took 4.007046ms to wait for pod list to return data ...
	I1216 05:04:19.057849  263814 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:04:19.060860  263814 default_sa.go:45] found service account: "default"
	I1216 05:04:19.060885  263814 default_sa.go:55] duration metric: took 3.028868ms for default service account to be created ...
	I1216 05:04:19.060895  263814 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:04:19.064065  263814 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:19.064099  263814 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:19.064108  263814 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running
	I1216 05:04:19.064128  263814 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:04:19.064137  263814 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:19.064159  263814 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running
	I1216 05:04:19.064166  263814 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:04:19.064172  263814 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running
	I1216 05:04:19.064179  263814 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:19.064212  263814 retry.go:31] will retry after 218.018541ms: missing components: kube-dns
	I1216 05:04:19.286358  263814 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:19.286390  263814 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:19.286395  263814 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running
	I1216 05:04:19.286402  263814 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:04:19.286407  263814 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:19.286411  263814 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running
	I1216 05:04:19.286415  263814 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:04:19.286418  263814 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running
	I1216 05:04:19.286423  263814 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:19.286436  263814 retry.go:31] will retry after 382.279473ms: missing components: kube-dns
	I1216 05:04:19.825322  263814 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:19.825357  263814 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:19.825367  263814 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running
	I1216 05:04:19.825374  263814 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:04:19.825382  263814 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:19.825387  263814 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running
	I1216 05:04:19.825394  263814 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:04:19.825399  263814 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running
	I1216 05:04:19.825406  263814 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:19.825418  263814 system_pods.go:126] duration metric: took 764.515965ms to wait for k8s-apps to be running ...
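The retry.go lines above ("will retry after 218.018541ms/382.279473ms: missing components: kube-dns") show jittered, growing delays between pod-list checks. The shape of that loop, sketched without any Kubernetes client:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // pollUntil retries check with jittered, growing delays until it returns
    // nil or the deadline passes — the same shape as the retry lines above.
    func pollUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow the base delay between attempts
        }
    }

    func main() {
        attempts := 0
        err := pollUntil(5*time.Second, func() error {
            attempts++
            if attempts < 3 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
        fmt.Println("result:", err)
    }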
	I1216 05:04:19.825431  263814 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:04:19.825481  263814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:04:19.841650  263814 system_svc.go:56] duration metric: took 16.210649ms WaitForService to wait for kubelet
	I1216 05:04:19.841678  263814 kubeadm.go:587] duration metric: took 13.757240355s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:19.841697  263814 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:04:20.078616  263814 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:04:20.078651  263814 node_conditions.go:123] node cpu capacity is 8
	I1216 05:04:20.078671  263814 node_conditions.go:105] duration metric: took 236.969298ms to run NodePressure ...
	I1216 05:04:20.078690  263814 start.go:242] waiting for startup goroutines ...
	I1216 05:04:20.078699  263814 start.go:247] waiting for cluster config update ...
	I1216 05:04:20.078711  263814 start.go:256] writing updated cluster config ...
	I1216 05:04:20.079046  263814 ssh_runner.go:195] Run: rm -f paused
	I1216 05:04:20.088292  263814 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:20.177230  263814 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.202814  263814 pod_ready.go:94] pod "coredns-7d764666f9-7l2ds" is "Ready"
	I1216 05:04:20.202846  263814 pod_ready.go:86] duration metric: took 25.583765ms for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.205136  263814 pod_ready.go:83] waiting for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.209844  263814 pod_ready.go:94] pod "etcd-no-preload-990631" is "Ready"
	I1216 05:04:20.209866  263814 pod_ready.go:86] duration metric: took 4.708808ms for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.211785  263814 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.215774  263814 pod_ready.go:94] pod "kube-apiserver-no-preload-990631" is "Ready"
	I1216 05:04:20.215801  263814 pod_ready.go:86] duration metric: took 3.995361ms for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.217762  263814 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.492729  263814 pod_ready.go:94] pod "kube-controller-manager-no-preload-990631" is "Ready"
	I1216 05:04:20.492755  263814 pod_ready.go:86] duration metric: took 274.97408ms for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.692751  263814 pod_ready.go:83] waiting for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:21.117132  263814 pod_ready.go:94] pod "kube-proxy-2bjbr" is "Ready"
	I1216 05:04:21.117160  263814 pod_ready.go:86] duration metric: took 424.386126ms for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:21.294157  263814 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:21.692685  263814 pod_ready.go:94] pod "kube-scheduler-no-preload-990631" is "Ready"
	I1216 05:04:21.692713  263814 pod_ready.go:86] duration metric: took 398.531676ms for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:21.692728  263814 pod_ready.go:40] duration metric: took 1.604403534s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:21.753508  263814 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:04:21.755601  263814 out.go:179] * Done! kubectl is now configured to use "no-preload-990631" cluster and "default" namespace by default
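start.go:625's "(minor skew: 1)" above compares only the minor components of the kubectl and cluster versions. A sketch; it ignores pre-release suffixes and assumes well-formed input:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew reproduces the comparison above:
    // "1.34.3" vs "1.35.0-beta.0" -> skew 1.
    func minorSkew(client, cluster string) int {
        minor := func(v string) int {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            n, _ := strconv.Atoi(parts[1])
            return n
        }
        d := minor(client) - minor(cluster)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Printf("kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: %d)\n",
            minorSkew("1.34.3", "1.35.0-beta.0"))
    }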
	I1216 05:04:17.436627  276917 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:04:17.436875  276917 start.go:159] libmachine.API.Create for "default-k8s-diff-port-375252" (driver="docker")
	I1216 05:04:17.436914  276917 client.go:173] LocalClient.Create starting
	I1216 05:04:17.436985  276917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:04:17.437051  276917 main.go:143] libmachine: Decoding PEM data...
	I1216 05:04:17.437082  276917 main.go:143] libmachine: Parsing certificate...
	I1216 05:04:17.437170  276917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:04:17.437205  276917 main.go:143] libmachine: Decoding PEM data...
	I1216 05:04:17.437225  276917 main.go:143] libmachine: Parsing certificate...
	I1216 05:04:17.437651  276917 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:04:17.455879  276917 cli_runner.go:211] docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:04:17.455943  276917 network_create.go:284] running [docker network inspect default-k8s-diff-port-375252] to gather additional debugging logs...
	I1216 05:04:17.455978  276917 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252
	W1216 05:04:17.474855  276917 cli_runner.go:211] docker network inspect default-k8s-diff-port-375252 returned with exit code 1
	I1216 05:04:17.474894  276917 network_create.go:287] error running [docker network inspect default-k8s-diff-port-375252]: docker network inspect default-k8s-diff-port-375252: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-375252 not found
	I1216 05:04:17.474916  276917 network_create.go:289] output of [docker network inspect default-k8s-diff-port-375252]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-375252 not found
	
	** /stderr **
	I1216 05:04:17.475079  276917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:17.492557  276917 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:04:17.493272  276917 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:04:17.493926  276917 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:04:17.494734  276917 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e6db20}
	I1216 05:04:17.494760  276917 network_create.go:124] attempt to create docker network default-k8s-diff-port-375252 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 05:04:17.494808  276917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-375252 default-k8s-diff-port-375252
	I1216 05:04:17.545363  276917 network_create.go:108] docker network default-k8s-diff-port-375252 192.168.76.0/24 created
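network.go above walks candidate /24s nine apart (192.168.49.0, 58.0, 67.0, then the free 76.0) and skips any subnet a host interface already occupies. A sketch of that scan; the real code also tracks reservations and bridge metadata:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks candidate /24s in the 9-wide steps visible in
    // the log and returns the first one no local interface sits in.
    func firstFreeSubnet() (*net.IPNet, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for third := 49; third <= 247; third += 9 {
            _, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
            taken := false
            for _, ifc := range ifaces {
                addrs, _ := ifc.Addrs()
                for _, a := range addrs {
                    if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
                        taken = true // e.g. an existing br-... bridge
                    }
                }
            }
            if !taken {
                return candidate, nil
            }
        }
        return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
    }

    func main() {
        subnet, err := firstFreeSubnet()
        fmt.Println(subnet, err)
    }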
	I1216 05:04:17.545398  276917 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-375252" container
	I1216 05:04:17.545470  276917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:04:17.564970  276917 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-375252 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-375252 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:04:17.585119  276917 oci.go:103] Successfully created a docker volume default-k8s-diff-port-375252
	I1216 05:04:17.585206  276917 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-375252-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-375252 --entrypoint /usr/bin/test -v default-k8s-diff-port-375252:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:04:17.982031  276917 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-375252
	I1216 05:04:17.982123  276917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:17.982140  276917 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:04:17.982213  276917 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-375252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 05:04:21.241223  276917 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-375252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.258940978s)
	I1216 05:04:21.241263  276917 kic.go:203] duration metric: took 3.259119741s to extract preloaded images to volume ...
	W1216 05:04:21.241349  276917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 05:04:21.241392  276917 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 05:04:21.241431  276917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 05:04:21.300391  276917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-375252 --name default-k8s-diff-port-375252 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-375252 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-375252 --network default-k8s-diff-port-375252 --ip 192.168.76.2 --volume default-k8s-diff-port-375252:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 05:04:21.582823  276917 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Running}}
	I1216 05:04:21.603080  276917 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:04:21.624563  276917 cli_runner.go:164] Run: docker exec default-k8s-diff-port-375252 stat /var/lib/dpkg/alternatives/iptables
	I1216 05:04:21.675834  276917 oci.go:144] the created container "default-k8s-diff-port-375252" has a running status.
	I1216 05:04:21.675877  276917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa...
	I1216 05:04:21.766702  276917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 05:04:21.798528  276917 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:04:21.828950  276917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 05:04:21.828986  276917 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-375252 chown docker:docker /home/docker/.ssh/authorized_keys]
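kic.go:225 above creates the node's SSH keypair and installs the public half as /home/docker/.ssh/authorized_keys (then chowns it, as the kic_runner lines show). A sketch with golang.org/x/crypto/ssh; the real code also persists id_rsa under .minikube/machines:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate the node keypair.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Render the public half in authorized_keys format.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub)) // "ssh-rsa AAAA...\n"
    }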
	I1216 05:04:21.885920  276917 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:04:21.919598  276917 machine.go:94] provisionDockerMachine start ...
	I1216 05:04:21.919857  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:21.962988  276917 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:21.964342  276917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1216 05:04:21.964820  276917 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:04:22.114886  276917 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:04:22.114914  276917 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-375252"
	I1216 05:04:22.114993  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:22.137162  276917 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:22.137459  276917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1216 05:04:22.137484  276917 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-375252 && echo "default-k8s-diff-port-375252" | sudo tee /etc/hostname
	I1216 05:04:21.459551  275199 cli_runner.go:164] Run: docker network inspect old-k8s-version-545075 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:21.479140  275199 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:04:21.483927  275199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:21.495746  275199 kubeadm.go:884] updating cluster {Name:old-k8s-version-545075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-545075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:04:21.495891  275199 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 05:04:21.495950  275199 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:21.530070  275199 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:21.530098  275199 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:04:21.530156  275199 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:21.557449  275199 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:21.557476  275199 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:04:21.557485  275199 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1216 05:04:21.557627  275199 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-545075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-545075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
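
One detail of the generated kubelet unit worth noting: the empty ExecStart= line is deliberate. In a systemd drop-in, an empty assignment clears the ExecStart inherited from the base kubelet.service, so the final unit ends up with exactly one start command:

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet <flags shown above>
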
	I1216 05:04:21.557717  275199 ssh_runner.go:195] Run: crio config
	I1216 05:04:21.607486  275199 cni.go:84] Creating CNI manager for ""
	I1216 05:04:21.607516  275199 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:21.607532  275199 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:04:21.607561  275199 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-545075 NodeName:old-k8s-version-545075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:04:21.607735  275199 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-545075"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:04:21.607811  275199 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
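
The kubeadm.yaml.new written above is one stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch for sanity-checking such a stream, assuming gopkg.in/yaml.v3 and a local copy of the file:

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml.new") // hypothetical local copy
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			// Each document carries apiVersion/kind,
			// e.g. kubeadm.k8s.io/v1beta3 InitConfiguration.
			fmt.Println(doc["apiVersion"], doc["kind"])
		}
	}
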
	I1216 05:04:21.616903  275199 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:04:21.616971  275199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:04:21.626299  275199 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1216 05:04:21.639952  275199 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:04:21.656779  275199 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1216 05:04:21.670595  275199 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:04:21.675320  275199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:21.686561  275199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:21.787121  275199 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:21.808625  275199 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075 for IP: 192.168.85.2
	I1216 05:04:21.808654  275199 certs.go:195] generating shared ca certs ...
	I1216 05:04:21.808685  275199 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:21.808863  275199 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:04:21.808919  275199 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:04:21.808931  275199 certs.go:257] generating profile certs ...
	I1216 05:04:21.809070  275199 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/client.key
	I1216 05:04:21.809165  275199 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/apiserver.key.32d58d38
	I1216 05:04:21.809227  275199 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/proxy-client.key
	I1216 05:04:21.809376  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:04:21.809428  275199 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:04:21.809441  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:04:21.809477  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:04:21.809572  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:04:21.809609  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:04:21.809681  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:21.810539  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:04:21.851608  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:04:21.877924  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:04:21.903751  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:04:21.945572  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 05:04:21.984966  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:04:22.012900  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:04:22.036264  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:04:22.059251  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:04:22.079477  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:04:22.099121  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:04:22.118511  275199 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:04:22.135116  275199 ssh_runner.go:195] Run: openssl version
	I1216 05:04:22.142797  275199 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:22.150894  275199 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:04:22.158835  275199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:22.162831  275199 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:22.162886  275199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:22.203226  275199 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:04:22.211531  275199 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:04:22.219189  275199 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:04:22.227007  275199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:04:22.230749  275199 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:04:22.230805  275199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:04:22.267953  275199 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:04:22.276060  275199 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:04:22.285205  275199 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:04:22.293656  275199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:04:22.298123  275199 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:04:22.298177  275199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:04:22.347282  275199 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
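
The test -s / ln -fs / openssl x509 -hash sequence repeated above is the standard way to populate an OpenSSL hash directory: tools look certificates up in /etc/ssl/certs by subject-name hash, so each CA needs a <hash>.0 symlink (the .0 suffix is a collision counter). The same two steps from Go, shelling out to openssl exactly as the log does (the helper name is illustrative):

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert installs certPath into an OpenSSL hash directory such as /etc/ssl/certs.
	func linkCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0") // ".0": first cert with this subject hash
		_ = os.Remove(link)                        // mimic ln -fs (force)
		return os.Symlink(certPath, link)
	}
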
	I1216 05:04:22.355972  275199 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:04:22.360277  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:04:22.409345  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:04:22.452563  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:04:22.497547  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:04:22.555583  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:04:22.596449  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 05:04:22.659677  275199 kubeadm.go:401] StartCluster: {Name:old-k8s-version-545075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-545075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:22.659799  275199 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:04:22.659867  275199 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:04:22.691586  275199 cri.go:89] found id: "01c4e82517b36d1f66b8f2d1adb4883fc3cc6a22d531ab5783d092744f69a8d2"
	I1216 05:04:22.691611  275199 cri.go:89] found id: "5aeeae98995f110f455b137eea822f077ed8722bcfac6f2d52cb7baa211a4f13"
	I1216 05:04:22.691617  275199 cri.go:89] found id: "31f7764ec0af58c2787749a48d903be38a43b0773a2a9b369902c7942218dc65"
	I1216 05:04:22.691623  275199 cri.go:89] found id: "af4a752092a6dfab68baa25810fa6bf8af6321385deaa1ca39c71f745e63993c"
	I1216 05:04:22.691627  275199 cri.go:89] found id: ""
	I1216 05:04:22.691676  275199 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:04:22.704071  275199 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:04:22Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:04:22.704197  275199 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:04:22.713981  275199 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:04:22.714001  275199 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:04:22.714056  275199 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:04:22.723309  275199 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:04:22.724549  275199 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-545075" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:22.725419  275199 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-545075" cluster setting kubeconfig missing "old-k8s-version-545075" context setting]
	I1216 05:04:22.726579  275199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:22.728895  275199 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:04:22.736730  275199 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 05:04:22.736764  275199 kubeadm.go:602] duration metric: took 22.756696ms to restartPrimaryControlPlane
	I1216 05:04:22.736776  275199 kubeadm.go:403] duration metric: took 77.108414ms to StartCluster
	I1216 05:04:22.736791  275199 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:22.736855  275199 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:22.738289  275199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:22.738531  275199 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:04:22.738730  275199 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:04:22.738813  275199 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-545075"
	I1216 05:04:22.738783  275199 config.go:182] Loaded profile config "old-k8s-version-545075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:04:22.738844  275199 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-545075"
	W1216 05:04:22.738858  275199 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:04:22.738885  275199 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-545075"
	I1216 05:04:22.738895  275199 host.go:66] Checking if "old-k8s-version-545075" exists ...
	I1216 05:04:22.738898  275199 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-545075"
	I1216 05:04:22.739231  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:04:22.738821  275199 addons.go:70] Setting dashboard=true in profile "old-k8s-version-545075"
	I1216 05:04:22.739385  275199 addons.go:239] Setting addon dashboard=true in "old-k8s-version-545075"
	I1216 05:04:22.739391  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	W1216 05:04:22.739397  275199 addons.go:248] addon dashboard should already be in state true
	I1216 05:04:22.739429  275199 host.go:66] Checking if "old-k8s-version-545075" exists ...
	I1216 05:04:22.739904  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:04:22.740393  275199 out.go:179] * Verifying Kubernetes components...
	I1216 05:04:22.741539  275199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:22.770093  275199 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-545075"
	W1216 05:04:22.770115  275199 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:04:22.770131  275199 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:04:22.770210  275199 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:04:22.770142  275199 host.go:66] Checking if "old-k8s-version-545075" exists ...
	I1216 05:04:22.770676  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:04:22.773651  275199 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:22.773670  275199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:04:22.773725  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:22.775049  275199 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 05:04:22.776272  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:04:22.776292  275199 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:04:22.776344  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:22.805209  275199 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:22.805232  275199 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:04:22.805332  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:22.810010  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:22.817004  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:22.836284  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:22.897124  275199 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:22.912387  275199 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-545075" to be "Ready" ...
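
The node_ready wait above polls the node's conditions through the API server until Ready is True or the 6m0s budget runs out. A minimal client-go version of the underlying check, assuming a reachable cluster (a sketch, not minikube's retry wrapper):

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the named node has the Ready condition set to True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
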
	I1216 05:04:22.925502  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:04:22.925529  275199 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:04:22.935387  275199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:22.941896  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:04:22.941923  275199 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:04:22.954671  275199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:22.959230  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:04:22.959255  275199 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:04:22.976514  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:04:22.976539  275199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:04:23.000990  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:04:23.001040  275199 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:04:23.021657  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:04:23.021706  275199 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:04:23.041161  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:04:23.041190  275199 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:04:23.058468  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:04:23.058499  275199 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:04:23.075948  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:04:23.075971  275199 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:04:23.091227  275199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:04:22.277840  276917 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:04:22.277944  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:22.299040  276917 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:22.299323  276917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1216 05:04:22.299356  276917 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-375252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-375252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-375252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:04:22.435916  276917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:04:22.435950  276917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:04:22.435995  276917 ubuntu.go:190] setting up certificates
	I1216 05:04:22.436057  276917 provision.go:84] configureAuth start
	I1216 05:04:22.436130  276917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:04:22.460339  276917 provision.go:143] copyHostCerts
	I1216 05:04:22.460406  276917 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:04:22.460418  276917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:04:22.460500  276917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:04:22.460624  276917 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:04:22.460637  276917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:04:22.460680  276917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:04:22.460781  276917 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:04:22.460792  276917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:04:22.460832  276917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:04:22.460907  276917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-375252 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-375252 localhost minikube]
	I1216 05:04:22.600360  276917 provision.go:177] copyRemoteCerts
	I1216 05:04:22.600408  276917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:04:22.600448  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:22.620529  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:22.716327  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:04:22.738642  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 05:04:22.768782  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:04:22.807483  276917 provision.go:87] duration metric: took 371.413003ms to configureAuth
	I1216 05:04:22.807527  276917 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:04:22.807718  276917 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:04:22.807822  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:22.836625  276917 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:22.836938  276917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1216 05:04:22.836956  276917 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:04:23.167256  276917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:04:23.167286  276917 machine.go:97] duration metric: took 1.247665995s to provisionDockerMachine
	I1216 05:04:23.167296  276917 client.go:176] duration metric: took 5.730375632s to LocalClient.Create
	I1216 05:04:23.167318  276917 start.go:167] duration metric: took 5.730446026s to libmachine.API.Create "default-k8s-diff-port-375252"
	I1216 05:04:23.167325  276917 start.go:293] postStartSetup for "default-k8s-diff-port-375252" (driver="docker")
	I1216 05:04:23.167334  276917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:04:23.167429  276917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:04:23.167479  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:23.188202  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:23.285282  276917 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:04:23.288865  276917 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:04:23.288895  276917 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:04:23.288906  276917 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:04:23.289041  276917 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:04:23.289136  276917 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:04:23.289256  276917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:04:23.296667  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:23.316596  276917 start.go:296] duration metric: took 149.256463ms for postStartSetup
	I1216 05:04:23.316950  276917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:04:23.336605  276917 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:04:23.336859  276917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:04:23.336915  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:23.356077  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:23.447461  276917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:04:23.452649  276917 start.go:128] duration metric: took 6.018305012s to createHost
	I1216 05:04:23.452674  276917 start.go:83] releasing machines lock for "default-k8s-diff-port-375252", held for 6.018485345s
	I1216 05:04:23.452745  276917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:04:23.476619  276917 ssh_runner.go:195] Run: cat /version.json
	I1216 05:04:23.476680  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:23.476693  276917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:04:23.476788  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:23.498437  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:23.500332  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:23.651234  276917 ssh_runner.go:195] Run: systemctl --version
	I1216 05:04:23.657524  276917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:04:23.693545  276917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:04:23.697969  276917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:04:23.698047  276917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:04:23.722709  276917 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:04:23.722730  276917 start.go:496] detecting cgroup driver to use...
	I1216 05:04:23.722764  276917 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:04:23.722815  276917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:04:23.738833  276917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:04:23.751240  276917 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:04:23.751297  276917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:04:23.767312  276917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:04:23.791192  276917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:04:23.902175  276917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:04:24.029578  276917 docker.go:234] disabling docker service ...
	I1216 05:04:24.029650  276917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:04:24.052284  276917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:04:24.067563  276917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:04:24.170602  276917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:04:24.265610  276917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:04:24.278835  276917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:04:24.294466  276917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:04:24.294531  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.307631  276917 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:04:24.307683  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.317271  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.326666  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.336300  276917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:04:24.345129  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.354579  276917 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.370064  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
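
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands (including the assumed TOML section names), not a dump of the actual file:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
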
	I1216 05:04:24.381197  276917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:04:24.389709  276917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:04:24.397476  276917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:24.485925  276917 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:04:24.628862  276917 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:04:24.628933  276917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:04:24.633240  276917 start.go:564] Will wait 60s for crictl version
	I1216 05:04:24.633305  276917 ssh_runner.go:195] Run: which crictl
	I1216 05:04:24.636923  276917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:04:24.663957  276917 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:04:24.664056  276917 ssh_runner.go:195] Run: crio --version
	I1216 05:04:24.700996  276917 ssh_runner.go:195] Run: crio --version
	I1216 05:04:24.732232  276917 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1216 05:04:20.371773  268657 node_ready.go:57] node "embed-certs-127599" has "Ready":"False" status (will retry)
	W1216 05:04:22.871333  268657 node_ready.go:57] node "embed-certs-127599" has "Ready":"False" status (will retry)
	I1216 05:04:24.876293  275199 node_ready.go:49] node "old-k8s-version-545075" is "Ready"
	I1216 05:04:24.876324  275199 node_ready.go:38] duration metric: took 1.963908874s for node "old-k8s-version-545075" to be "Ready" ...
	I1216 05:04:24.876341  275199 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:04:24.876412  275199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:25.660839  275199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.725414856s)
	I1216 05:04:25.660861  275199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.706155903s)
	I1216 05:04:25.968010  275199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.876729645s)
	I1216 05:04:25.968232  275199 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.091795126s)
	I1216 05:04:25.968257  275199 api_server.go:72] duration metric: took 3.22966823s to wait for apiserver process to appear ...
	I1216 05:04:25.968264  275199 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:04:25.968288  275199 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:04:25.969454  275199 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-545075 addons enable metrics-server
	
	I1216 05:04:25.970995  275199 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1216 05:04:24.733590  276917 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:24.763658  276917 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 05:04:24.769402  276917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:24.783134  276917 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:04:24.783277  276917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:24.783337  276917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:24.822850  276917 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:24.822870  276917 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:04:24.822944  276917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:24.848386  276917 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:24.848411  276917 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:04:24.848418  276917 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1216 05:04:24.848523  276917 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-375252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:04:24.848603  276917 ssh_runner.go:195] Run: crio config
	I1216 05:04:24.930792  276917 cni.go:84] Creating CNI manager for ""
	I1216 05:04:24.930823  276917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:24.930844  276917 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:04:24.930873  276917 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-375252 NodeName:default-k8s-diff-port-375252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:04:24.931033  276917 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-375252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:04:24.931108  276917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
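
Note the schema change from the v1.28.0 config earlier in this run: kubeadm's v1beta3 expressed extraArgs (and kubeletExtraArgs) as a plain string map, while v1beta4, used here with v1.34.2, takes an ordered list of name/value pairs, which preserves flag order and permits repeated flags. In Go struct terms, roughly:

	// v1beta3: flags are a map, so each flag can appear only once.
	type ComponentV1beta3 struct {
		ExtraArgs map[string]string `yaml:"extraArgs,omitempty"`
	}

	// v1beta4: flags are an ordered list, so order is preserved
	// and a flag may be repeated.
	type Arg struct {
		Name  string `yaml:"name"`
		Value string `yaml:"value"`
	}

	type ComponentV1beta4 struct {
		ExtraArgs []Arg `yaml:"extraArgs,omitempty"`
	}
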
	I1216 05:04:24.945621  276917 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:04:24.945697  276917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:04:24.960512  276917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1216 05:04:24.975778  276917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:04:24.992817  276917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
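
A quick way to sanity-check the kubeadm.yaml rendered above before the real init is kubeadm's own dry-run mode. A minimal sketch in the same os/exec style as the ssh_runner calls in this log (assumes kubeadm is on PATH on the node and the file sits at the path shown above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run parses and validates the config without mutating the node.
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit: %v\n%s", err, out)
}
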
	I1216 05:04:25.009213  276917 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:04:25.014994  276917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
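
The one-liner above is an idempotent /etc/hosts update: grep -v strips any stale control-plane.minikube.internal entry, the fresh mapping is appended, and the result is copied back with cp rather than renamed, which keeps the file's inode intact (under the docker driver /etc/hosts is a bind mount that a rename cannot replace). A rough standalone sketch of the same read-filter-append-overwrite pattern (hypothetical helper, not the minikube code; the real file needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHost rewrites a hosts-style file in place, dropping any existing
// line for host and appending a fresh ip<TAB>host entry. os.WriteFile
// truncates and rewrites the same inode, matching the cp-over behavior
// in the log line above.
func setHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
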
	I1216 05:04:25.028770  276917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:25.123250  276917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:25.145673  276917 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252 for IP: 192.168.76.2
	I1216 05:04:25.145695  276917 certs.go:195] generating shared ca certs ...
	I1216 05:04:25.145715  276917 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.145877  276917 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:04:25.145932  276917 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:04:25.145946  276917 certs.go:257] generating profile certs ...
	I1216 05:04:25.146039  276917 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key
	I1216 05:04:25.146059  276917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.crt with IP's: []
	I1216 05:04:25.364495  276917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.crt ...
	I1216 05:04:25.364530  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.crt: {Name:mkb527b76296aab2183819f30df28a31ddcd94af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.364732  276917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key ...
	I1216 05:04:25.364762  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key: {Name:mk134a349e31e121726c23cdd4c6546051879822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.364868  276917 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f
	I1216 05:04:25.364897  276917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt.7bfc343f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 05:04:25.458506  276917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt.7bfc343f ...
	I1216 05:04:25.458533  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt.7bfc343f: {Name:mkd6c2cbf0f0ab527556172af63a11310409cae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.458679  276917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f ...
	I1216 05:04:25.458692  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f: {Name:mk70aea777a1acbdd01425c98e4088c44247bf2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.458790  276917 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt.7bfc343f -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt
	I1216 05:04:25.458910  276917 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key
	I1216 05:04:25.458999  276917 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key
	I1216 05:04:25.459029  276917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt with IP's: []
	I1216 05:04:25.497643  276917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt ...
	I1216 05:04:25.497672  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt: {Name:mkd474b3b48dc58c95c68dc62592233f6eb9d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.497822  276917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key ...
	I1216 05:04:25.497834  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key: {Name:mk24d0a80b44da0c838d5cc5a94198b14104e1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.498013  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:04:25.498074  276917 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:04:25.498085  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:04:25.498109  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:04:25.498133  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:04:25.498156  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:04:25.498198  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
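
The "generating profile certs" steps above issue an apiserver serving cert whose IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) cover the service VIP, loopback, and the node IP; the .7bfc343f filename suffix appears to key the cert to that SAN set so a changed IP forces regeneration. A minimal sketch of issuing such a cert with Go's crypto/x509 (self-signed here for brevity; the real cert is signed by the minikubeCA key listed above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	// Self-signed: the template doubles as its own parent. minikube
	// instead signs with the shared CA so clients trust a single root.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
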
	I1216 05:04:25.498790  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:04:25.519438  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:04:25.539339  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:04:25.561331  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:04:25.586823  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 05:04:25.605210  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:04:25.628909  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:04:25.658176  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 05:04:25.679409  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:04:25.700819  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:04:25.717696  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:04:25.734330  276917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:04:25.746570  276917 ssh_runner.go:195] Run: openssl version
	I1216 05:04:25.752648  276917 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:25.761487  276917 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:04:25.769167  276917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:25.773313  276917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:25.773372  276917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:25.812271  276917 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:04:25.820049  276917 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:04:25.827677  276917 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:04:25.835217  276917 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:04:25.843522  276917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:04:25.847476  276917 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:04:25.847527  276917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:04:25.897898  276917 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:04:25.907457  276917 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:04:25.916732  276917 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:04:25.925992  276917 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:04:25.935801  276917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:04:25.940290  276917 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:04:25.940359  276917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:04:25.991755  276917 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:04:26.001450  276917 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
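
The openssl x509 -hash calls above compute the subject hash under which OpenSSL looks up CA files in /etc/ssl/certs, and the ln -fs lines create the corresponding <hash>.0 symlinks (the same layout c_rehash produces, e.g. b5213941.0 for minikubeCA.pem). A small sketch of that two-step dance for one certificate (assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // mimic ln -fs: drop any stale link first
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
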
	I1216 05:04:26.013165  276917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:04:26.018276  276917 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:04:26.018339  276917 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:26.018426  276917 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:04:26.018482  276917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:04:26.052967  276917 cri.go:89] found id: ""
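
An empty ID list from the crictl query above simply means no kube-system containers exist yet on this fresh node. The same check can be reproduced directly on the node; a sketch wrapping the exact command from the log (assumes crictl is configured against the CRI-O socket):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query minikube runs above: all containers, IDs only,
	// filtered to the kube-system namespace label.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
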
	I1216 05:04:26.053066  276917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:04:26.062503  276917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:04:26.072475  276917 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:04:26.072928  276917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:04:26.081406  276917 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:04:26.081431  276917 kubeadm.go:158] found existing configuration files:
	
	I1216 05:04:26.081473  276917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 05:04:26.088932  276917 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:04:26.088992  276917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:04:26.096683  276917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 05:04:26.103972  276917 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:04:26.104049  276917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:04:26.111284  276917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 05:04:26.119413  276917 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:04:26.119462  276917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:04:26.126734  276917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 05:04:26.134473  276917 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:04:26.134544  276917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:04:26.142567  276917 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:04:26.201201  276917 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:04:26.260984  276917 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
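
Both preflight warnings above are non-fatal: the missing "configs" kernel module only blocks kubeadm's kernel-config inspection, and the Service-Kubelet warning goes away once the unit is enabled, using exactly the command the message suggests. A one-call sketch of that fix:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The command the preflight warning itself recommends.
	out, err := exec.Command("sudo", "systemctl", "enable", "kubelet.service").CombinedOutput()
	fmt.Printf("exit: %v\n%s", err, out)
}
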
	I1216 05:04:25.972233  275199 addons.go:530] duration metric: took 3.233501902s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1216 05:04:25.973474  275199 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1216 05:04:25.973495  275199 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1216 05:04:26.469065  275199 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:04:26.473760  275199 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:04:26.475224  275199 api_server.go:141] control plane version: v1.28.0
	I1216 05:04:26.475251  275199 api_server.go:131] duration metric: took 506.979362ms to wait for apiserver health ...
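
The health wait above is a poll loop against the apiserver's /healthz endpoint: a 500 while poststarthook/rbac/bootstrap-roles is still pending, then 200 once the bootstrap roles land. A minimal standalone version of that loop (address taken from the log; InsecureSkipVerify because the apiserver cert chains to the local minikubeCA, not a system root):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := c.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
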
	I1216 05:04:26.475263  275199 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:04:26.479253  275199 system_pods.go:59] 8 kube-system pods found
	I1216 05:04:26.479297  275199 system_pods.go:61] "coredns-5dd5756b68-fplzg" [c0e4aa41-d754-405c-83b6-53a5dd2fc511] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:26.479313  275199 system_pods.go:61] "etcd-old-k8s-version-545075" [da9d0634-efc9-4a8e-9c8d-f4d8bb9f80c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:04:26.479324  275199 system_pods.go:61] "kindnet-r74zw" [7905ab47-f1e5-4003-bc10-05de82340a76] Running
	I1216 05:04:26.479334  275199 system_pods.go:61] "kube-apiserver-old-k8s-version-545075" [ba5240be-0e87-43f4-b06d-ee5f28aa9d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:26.479345  275199 system_pods.go:61] "kube-controller-manager-old-k8s-version-545075" [f167bb45-3b1b-4c6b-80ab-b84d3c84b61d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:04:26.479353  275199 system_pods.go:61] "kube-proxy-7pldc" [0fdb902a-9463-46da-835f-598868f4d79b] Running
	I1216 05:04:26.479362  275199 system_pods.go:61] "kube-scheduler-old-k8s-version-545075" [0e0a438f-7c47-4270-a5c0-ebbef604d058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:04:26.479369  275199 system_pods.go:61] "storage-provisioner" [6ee6ea7e-845f-4594-8bbe-a0c45e39e694] Running
	I1216 05:04:26.479376  275199 system_pods.go:74] duration metric: took 4.106714ms to wait for pod list to return data ...
	I1216 05:04:26.479388  275199 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:04:26.481829  275199 default_sa.go:45] found service account: "default"
	I1216 05:04:26.481847  275199 default_sa.go:55] duration metric: took 2.452565ms for default service account to be created ...
	I1216 05:04:26.481857  275199 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:04:26.485280  275199 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:26.485308  275199 system_pods.go:89] "coredns-5dd5756b68-fplzg" [c0e4aa41-d754-405c-83b6-53a5dd2fc511] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:26.485319  275199 system_pods.go:89] "etcd-old-k8s-version-545075" [da9d0634-efc9-4a8e-9c8d-f4d8bb9f80c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:04:26.485327  275199 system_pods.go:89] "kindnet-r74zw" [7905ab47-f1e5-4003-bc10-05de82340a76] Running
	I1216 05:04:26.485336  275199 system_pods.go:89] "kube-apiserver-old-k8s-version-545075" [ba5240be-0e87-43f4-b06d-ee5f28aa9d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:26.485347  275199 system_pods.go:89] "kube-controller-manager-old-k8s-version-545075" [f167bb45-3b1b-4c6b-80ab-b84d3c84b61d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:04:26.485352  275199 system_pods.go:89] "kube-proxy-7pldc" [0fdb902a-9463-46da-835f-598868f4d79b] Running
	I1216 05:04:26.485365  275199 system_pods.go:89] "kube-scheduler-old-k8s-version-545075" [0e0a438f-7c47-4270-a5c0-ebbef604d058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:04:26.485373  275199 system_pods.go:89] "storage-provisioner" [6ee6ea7e-845f-4594-8bbe-a0c45e39e694] Running
	I1216 05:04:26.485382  275199 system_pods.go:126] duration metric: took 3.518714ms to wait for k8s-apps to be running ...
	I1216 05:04:26.485393  275199 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:04:26.485439  275199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:04:26.498440  275199 system_svc.go:56] duration metric: took 13.042159ms WaitForService to wait for kubelet
	I1216 05:04:26.498462  275199 kubeadm.go:587] duration metric: took 3.759872248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:26.498480  275199 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:04:26.500724  275199 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:04:26.500745  275199 node_conditions.go:123] node cpu capacity is 8
	I1216 05:04:26.500761  275199 node_conditions.go:105] duration metric: took 2.276204ms to run NodePressure ...
	I1216 05:04:26.500777  275199 start.go:242] waiting for startup goroutines ...
	I1216 05:04:26.500791  275199 start.go:247] waiting for cluster config update ...
	I1216 05:04:26.500808  275199 start.go:256] writing updated cluster config ...
	I1216 05:04:26.501138  275199 ssh_runner.go:195] Run: rm -f paused
	I1216 05:04:26.504835  275199 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:26.509152  275199 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fplzg" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:04:24.873134  268657 node_ready.go:57] node "embed-certs-127599" has "Ready":"False" status (will retry)
	I1216 05:04:27.371994  268657 node_ready.go:49] node "embed-certs-127599" is "Ready"
	I1216 05:04:27.372055  268657 node_ready.go:38] duration metric: took 11.503904826s for node "embed-certs-127599" to be "Ready" ...
	I1216 05:04:27.372073  268657 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:04:27.372130  268657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:27.386158  268657 api_server.go:72] duration metric: took 11.802186015s to wait for apiserver process to appear ...
	I1216 05:04:27.386190  268657 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:04:27.386215  268657 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 05:04:27.391935  268657 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 05:04:27.393000  268657 api_server.go:141] control plane version: v1.34.2
	I1216 05:04:27.393036  268657 api_server.go:131] duration metric: took 6.838ms to wait for apiserver health ...
	I1216 05:04:27.393047  268657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:04:27.396910  268657 system_pods.go:59] 8 kube-system pods found
	I1216 05:04:27.396947  268657 system_pods.go:61] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:27.396958  268657 system_pods.go:61] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running
	I1216 05:04:27.396967  268657 system_pods.go:61] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running
	I1216 05:04:27.396972  268657 system_pods.go:61] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running
	I1216 05:04:27.396984  268657 system_pods.go:61] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running
	I1216 05:04:27.396993  268657 system_pods.go:61] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running
	I1216 05:04:27.396999  268657 system_pods.go:61] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running
	I1216 05:04:27.397012  268657 system_pods.go:61] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:27.397033  268657 system_pods.go:74] duration metric: took 3.977943ms to wait for pod list to return data ...
	I1216 05:04:27.397048  268657 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:04:27.399479  268657 default_sa.go:45] found service account: "default"
	I1216 05:04:27.399501  268657 default_sa.go:55] duration metric: took 2.446359ms for default service account to be created ...
	I1216 05:04:27.399511  268657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:04:27.402249  268657 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:27.402282  268657 system_pods.go:89] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:27.402290  268657 system_pods.go:89] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running
	I1216 05:04:27.402299  268657 system_pods.go:89] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running
	I1216 05:04:27.402304  268657 system_pods.go:89] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running
	I1216 05:04:27.402310  268657 system_pods.go:89] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running
	I1216 05:04:27.402316  268657 system_pods.go:89] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running
	I1216 05:04:27.402321  268657 system_pods.go:89] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running
	I1216 05:04:27.402329  268657 system_pods.go:89] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:27.402356  268657 retry.go:31] will retry after 227.432634ms: missing components: kube-dns
	I1216 05:04:27.634453  268657 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:27.634484  268657 system_pods.go:89] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:27.634493  268657 system_pods.go:89] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running
	I1216 05:04:27.634501  268657 system_pods.go:89] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running
	I1216 05:04:27.634506  268657 system_pods.go:89] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running
	I1216 05:04:27.634512  268657 system_pods.go:89] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running
	I1216 05:04:27.634518  268657 system_pods.go:89] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running
	I1216 05:04:27.634523  268657 system_pods.go:89] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running
	I1216 05:04:27.634531  268657 system_pods.go:89] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:27.634550  268657 retry.go:31] will retry after 388.660227ms: missing components: kube-dns
	I1216 05:04:28.026923  268657 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:28.026958  268657 system_pods.go:89] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Running
	I1216 05:04:28.026964  268657 system_pods.go:89] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running
	I1216 05:04:28.026968  268657 system_pods.go:89] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running
	I1216 05:04:28.026971  268657 system_pods.go:89] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running
	I1216 05:04:28.026975  268657 system_pods.go:89] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running
	I1216 05:04:28.026978  268657 system_pods.go:89] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running
	I1216 05:04:28.026981  268657 system_pods.go:89] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running
	I1216 05:04:28.026985  268657 system_pods.go:89] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Running
	I1216 05:04:28.026997  268657 system_pods.go:126] duration metric: took 627.478445ms to wait for k8s-apps to be running ...
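
The retry.go lines above come from a simple poll-until-ready helper: list the kube-system pods, retry with a growing (apparently jittered) delay while anything is missing, stop once everything reports Running. A minimal standalone sketch of the same pattern (pollUntil and the fake check are hypothetical stand-ins, not the minikube code; the jitter is approximated with a fixed growth factor):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries fn with growing delays until it succeeds or the
// deadline passes, mirroring the "will retry after ..." lines above.
func pollUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
}

func main() {
	attempts := 0
	err := pollUntil(10*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("done:", err)
}
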
	I1216 05:04:28.027007  268657 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:04:28.027071  268657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:04:28.039666  268657 system_svc.go:56] duration metric: took 12.65066ms WaitForService to wait for kubelet
	I1216 05:04:28.039692  268657 kubeadm.go:587] duration metric: took 12.455724365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:28.039714  268657 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:04:28.042299  268657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:04:28.042327  268657 node_conditions.go:123] node cpu capacity is 8
	I1216 05:04:28.042345  268657 node_conditions.go:105] duration metric: took 2.624162ms to run NodePressure ...
	I1216 05:04:28.042359  268657 start.go:242] waiting for startup goroutines ...
	I1216 05:04:28.042369  268657 start.go:247] waiting for cluster config update ...
	I1216 05:04:28.042386  268657 start.go:256] writing updated cluster config ...
	I1216 05:04:28.042680  268657 ssh_runner.go:195] Run: rm -f paused
	I1216 05:04:28.046428  268657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:28.049554  268657 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.053358  268657 pod_ready.go:94] pod "coredns-66bc5c9577-47h64" is "Ready"
	I1216 05:04:28.053383  268657 pod_ready.go:86] duration metric: took 3.805342ms for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.055231  268657 pod_ready.go:83] waiting for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.058536  268657 pod_ready.go:94] pod "etcd-embed-certs-127599" is "Ready"
	I1216 05:04:28.058554  268657 pod_ready.go:86] duration metric: took 3.305483ms for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.060321  268657 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.063481  268657 pod_ready.go:94] pod "kube-apiserver-embed-certs-127599" is "Ready"
	I1216 05:04:28.063498  268657 pod_ready.go:86] duration metric: took 3.162268ms for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.065201  268657 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.450149  268657 pod_ready.go:94] pod "kube-controller-manager-embed-certs-127599" is "Ready"
	I1216 05:04:28.450179  268657 pod_ready.go:86] duration metric: took 384.952566ms for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.650573  268657 pod_ready.go:83] waiting for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:29.050754  268657 pod_ready.go:94] pod "kube-proxy-j79cw" is "Ready"
	I1216 05:04:29.050781  268657 pod_ready.go:86] duration metric: took 400.183365ms for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:29.251166  268657 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:29.650340  268657 pod_ready.go:94] pod "kube-scheduler-embed-certs-127599" is "Ready"
	I1216 05:04:29.650365  268657 pod_ready.go:86] duration metric: took 399.175267ms for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:29.650377  268657 pod_ready.go:40] duration metric: took 1.603920465s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:29.693455  268657 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:04:29.695415  268657 out.go:179] * Done! kubectl is now configured to use "embed-certs-127599" cluster and "default" namespace by default
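
The per-pod "Ready" waits above can be approximated from outside minikube with kubectl wait; a sketch for the kube-dns case (hypothetical invocation, assuming the embed-certs-127599 context that the final line says is now active):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Blocks until the coredns pods report the Ready condition,
	// matching the 4m0s budget used in the log above.
	cmd := exec.Command("kubectl", "--context", "embed-certs-127599",
		"-n", "kube-system", "wait", "--for=condition=Ready",
		"pod", "-l", "k8s-app=kube-dns", "--timeout=4m0s")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit: %v\n%s", err, out)
}
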
	
	
	==> CRI-O <==
	Dec 16 05:04:18 no-preload-990631 crio[770]: time="2025-12-16T05:04:18.958459486Z" level=info msg="Starting container: e82b1c684cd47cf2e41275d1ecf218ae3d87b372fefc99e6119ec2f62a727a16" id=664569bd-b4b9-4297-9a92-cde7256369f8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:04:18 no-preload-990631 crio[770]: time="2025-12-16T05:04:18.961081903Z" level=info msg="Started container" PID=2833 containerID=e82b1c684cd47cf2e41275d1ecf218ae3d87b372fefc99e6119ec2f62a727a16 description=kube-system/coredns-7d764666f9-7l2ds/coredns id=664569bd-b4b9-4297-9a92-cde7256369f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=336aa0dc0b09891942f28212cdaac61c18794d2370a4cd73ca072b4d8fce72ee
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.311997062Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3f96cba9-7c13-4cf2-bc1d-3c2e26a2aade name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.312120181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.317260675Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8adec10ecb5ecbe7ebee9634147db3bd3ad50709b35c18246b437eb867f0cd78 UID:bb35de7b-e677-4d9d-9a4e-fb0c49e18b0a NetNS:/var/run/netns/d9156128-319e-4262-8f53-90d031bf64cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a8d40}] Aliases:map[]}"
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.317294902Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.328587874Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8adec10ecb5ecbe7ebee9634147db3bd3ad50709b35c18246b437eb867f0cd78 UID:bb35de7b-e677-4d9d-9a4e-fb0c49e18b0a NetNS:/var/run/netns/d9156128-319e-4262-8f53-90d031bf64cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a8d40}] Aliases:map[]}"
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.328739707Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.32979286Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.331161967Z" level=info msg="Ran pod sandbox 8adec10ecb5ecbe7ebee9634147db3bd3ad50709b35c18246b437eb867f0cd78 with infra container: default/busybox/POD" id=3f96cba9-7c13-4cf2-bc1d-3c2e26a2aade name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.332586939Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e9237008-f139-4b61-9d72-3d46e9e6106e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.33271547Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e9237008-f139-4b61-9d72-3d46e9e6106e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.332749681Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e9237008-f139-4b61-9d72-3d46e9e6106e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.33353626Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dae0f49f-059b-48aa-8a56-98064cd9bb35 name=/runtime.v1.ImageService/PullImage
	Dec 16 05:04:22 no-preload-990631 crio[770]: time="2025-12-16T05:04:22.336839957Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.60443046Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=dae0f49f-059b-48aa-8a56-98064cd9bb35 name=/runtime.v1.ImageService/PullImage
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.605069733Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=16dcabab-c2b3-45be-8943-89a61a73793f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.606654037Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6a5377d5-8664-4000-bf2f-8092d49f4c7a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.610343511Z" level=info msg="Creating container: default/busybox/busybox" id=69c0fa01-e2ef-4aea-94cb-02f641a10220 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.610440563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.61380657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.614557834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.639806823Z" level=info msg="Created container a4811e7a47671e19f2c06185bc28c1570cf8a29d15f25a04efa085a78850c41a: default/busybox/busybox" id=69c0fa01-e2ef-4aea-94cb-02f641a10220 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.640378038Z" level=info msg="Starting container: a4811e7a47671e19f2c06185bc28c1570cf8a29d15f25a04efa085a78850c41a" id=4ee67f1d-97df-4e5f-846f-37aeec667caf name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:04:23 no-preload-990631 crio[770]: time="2025-12-16T05:04:23.642046212Z" level=info msg="Started container" PID=2908 containerID=a4811e7a47671e19f2c06185bc28c1570cf8a29d15f25a04efa085a78850c41a description=default/busybox/busybox id=4ee67f1d-97df-4e5f-846f-37aeec667caf name=/runtime.v1.RuntimeService/StartContainer sandboxID=8adec10ecb5ecbe7ebee9634147db3bd3ad50709b35c18246b437eb867f0cd78
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a4811e7a47671       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   8adec10ecb5ec       busybox                                     default
	e82b1c684cd47       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   336aa0dc0b098       coredns-7d764666f9-7l2ds                    kube-system
	5885320279f32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   283a95198f997       storage-provisioner                         kube-system
	2c964ff186a65       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   c43214c7b1cb0       kindnet-p27s9                               kube-system
	19dfdee1ceda3       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      26 seconds ago      Running             kube-proxy                0                   4c3c0a4782b2d       kube-proxy-2bjbr                            kube-system
	5d50387ec66ef       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      36 seconds ago      Running             kube-scheduler            0                   415a648a31d6b       kube-scheduler-no-preload-990631            kube-system
	d5c7c7abdd560       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      36 seconds ago      Running             kube-controller-manager   0                   552757ff7033e       kube-controller-manager-no-preload-990631   kube-system
	bdab06fc87211       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      36 seconds ago      Running             etcd                      0                   31be294f705e5       etcd-no-preload-990631                      kube-system
	f5820d03c8a53       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      36 seconds ago      Running             kube-apiserver            0                   1c8dfb7fe0bff       kube-apiserver-no-preload-990631            kube-system
	
	
	==> coredns [e82b1c684cd47cf2e41275d1ecf218ae3d87b372fefc99e6119ec2f62a727a16] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47501 - 37808 "HINFO IN 6626490593381842506.5963567697660363635. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019689445s
	
	
	==> describe nodes <==
	Name:               no-preload-990631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-990631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=no-preload-990631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-990631
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:04:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:04:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:04:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:04:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:04:31 +0000   Tue, 16 Dec 2025 05:04:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-990631
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                fe9a87bf-3ac0-4eb7-ad47-1e9ad27489f8
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-7l2ds                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-990631                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-p27s9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-990631             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-990631    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-2bjbr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-990631             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node no-preload-990631 event: Registered Node no-preload-990631 in Controller
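	This node snapshot follows the format of kubectl's node describe output. The same health and capacity fields can be re-queried directly; a minimal sketch, assuming the no-preload-990631 kube context is still available:
	    kubectl --context no-preload-990631 describe node no-preload-990631
	    kubectl --context no-preload-990631 get node no-preload-990631 -o jsonpath='{.status.allocatable}'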
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
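	The repeated "martian source" entries record packets whose source address (127.0.0.1) is not valid on the receiving interface (eth0); the kernel only logs these when the log_martians sysctl is enabled. A quick check, assuming shell access on the machine that produced this dmesg:
	    sysctl net.ipv4.conf.all.log_martians
	    sysctl net.ipv4.conf.eth0.log_martians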
	
	
	==> etcd [bdab06fc872111f314b4c91c3b0ce43e4343d168190fb4a1f1f3e185f53c69f0] <==
	{"level":"warn","ts":"2025-12-16T05:03:57.318837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.326069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.333722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.341085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.347815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.373639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.382218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.390544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.397839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:03:57.450966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:19.822438Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.934023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T05:04:19.822537Z","caller":"traceutil/trace.go:172","msg":"trace[1565307046] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:423; }","duration":"152.084697ms","start":"2025-12-16T05:04:19.670435Z","end":"2025-12-16T05:04:19.822520Z","steps":["trace[1565307046] 'agreement among raft nodes before linearized reading'  (duration: 81.265202ms)","trace[1565307046] 'range keys from in-memory index tree'  (duration: 70.631499ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:04:19.822540Z","caller":"traceutil/trace.go:172","msg":"trace[776984534] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"212.52416ms","start":"2025-12-16T05:04:19.610005Z","end":"2025-12-16T05:04:19.822530Z","steps":["trace[776984534] 'process raft request'  (duration: 141.752091ms)","trace[776984534] 'compare'  (duration: 70.625269ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:04:19.825816Z","caller":"traceutil/trace.go:172","msg":"trace[567069613] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"210.515325ms","start":"2025-12-16T05:04:19.615279Z","end":"2025-12-16T05:04:19.825794Z","steps":["trace[567069613] 'process raft request'  (duration: 210.443272ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:04:19.826237Z","caller":"traceutil/trace.go:172","msg":"trace[1735570878] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"215.903893ms","start":"2025-12-16T05:04:19.610003Z","end":"2025-12-16T05:04:19.825907Z","steps":["trace[1735570878] 'process raft request'  (duration: 215.628573ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:04:20.076951Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.999271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-16T05:04:20.076978Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.047921ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T05:04:20.077045Z","caller":"traceutil/trace.go:172","msg":"trace[1942183880] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:427; }","duration":"234.083004ms","start":"2025-12-16T05:04:19.842923Z","end":"2025-12-16T05:04:20.077006Z","steps":["trace[1942183880] 'agreement among raft nodes before linearized reading'  (duration: 94.983084ms)","trace[1942183880] 'range keys from in-memory index tree'  (duration: 138.984357ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:04:20.077075Z","caller":"traceutil/trace.go:172","msg":"trace[543555305] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:427; }","duration":"213.155268ms","start":"2025-12-16T05:04:19.863907Z","end":"2025-12-16T05:04:20.077063Z","steps":["trace[543555305] 'agreement among raft nodes before linearized reading'  (duration: 74.016894ms)","trace[543555305] 'range keys from in-memory index tree'  (duration: 139.013417ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T05:04:20.077618Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.058036ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790712642420598 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-kv85f\" mod_revision:425 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-kv85f\" value_size:1199 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-kv85f\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-16T05:04:20.077799Z","caller":"traceutil/trace.go:172","msg":"trace[2059817755] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"245.798963ms","start":"2025-12-16T05:04:19.831983Z","end":"2025-12-16T05:04:20.077782Z","steps":["trace[2059817755] 'process raft request'  (duration: 105.957767ms)","trace[2059817755] 'compare'  (duration: 138.979298ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:04:20.077834Z","caller":"traceutil/trace.go:172","msg":"trace[1150378824] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"245.678947ms","start":"2025-12-16T05:04:19.832142Z","end":"2025-12-16T05:04:20.077821Z","steps":["trace[1150378824] 'process raft request'  (duration: 245.54292ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:04:20.077868Z","caller":"traceutil/trace.go:172","msg":"trace[1994832526] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"244.401031ms","start":"2025-12-16T05:04:19.833456Z","end":"2025-12-16T05:04:20.077857Z","steps":["trace[1994832526] 'process raft request'  (duration: 244.320379ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:04:20.078924Z","caller":"traceutil/trace.go:172","msg":"trace[432463816] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"186.291221ms","start":"2025-12-16T05:04:19.891979Z","end":"2025-12-16T05:04:20.078270Z","steps":["trace[432463816] 'process raft request'  (duration: 185.85687ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:04:20.080385Z","caller":"traceutil/trace.go:172","msg":"trace[298024566] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"186.624699ms","start":"2025-12-16T05:04:19.893744Z","end":"2025-12-16T05:04:20.080369Z","steps":["trace[298024566] 'process raft request'  (duration: 184.462953ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:04:32 up 47 min,  0 user,  load average: 3.54, 2.57, 1.83
	Linux no-preload-990631 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2c964ff186a65b867b80cdfbcac6d0129017acb9c67d4ffdb067c5ee66742ca9] <==
	I1216 05:04:08.199312       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:04:08.199556       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 05:04:08.199699       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:04:08.199714       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:04:08.199723       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:04:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:04:08.400573       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:04:08.400618       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:04:08.400632       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:04:08.400772       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:04:08.900917       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:04:08.900957       1 metrics.go:72] Registering metrics
	I1216 05:04:08.901031       1 controller.go:711] "Syncing nftables rules"
	I1216 05:04:18.401663       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:04:18.401716       1 main.go:301] handling current node
	I1216 05:04:28.400869       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:04:28.400900       1 main.go:301] handling current node
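	The "nri plugin exited" line is kindnet failing to reach the Node Resource Interface socket; it degrades gracefully and keeps handling nodes, as the later lines show. Whether the runtime exposes NRI can be checked directly; a sketch, assuming CRI-O's stock config layout (the [crio.nri] table with its enable_nri key):
	    minikube -p no-preload-990631 ssh -- ls -l /var/run/nri/nri.sock
	    minikube -p no-preload-990631 ssh -- sudo crio config | grep -n -A2 'crio.nri'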
	
	
	==> kube-apiserver [f5820d03c8a53d65abc4980a76f6cf872ef06ce3e7d87b4ec1214393cc2eb105] <==
	I1216 05:03:57.942444       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 05:03:57.942499       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:03:57.942724       1 aggregator.go:187] initial CRD sync complete...
	I1216 05:03:57.942737       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 05:03:57.942744       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:03:57.942751       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:03:58.124851       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:03:58.836464       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1216 05:03:58.841205       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1216 05:03:58.841223       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 05:03:59.320666       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:03:59.359815       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:03:59.439163       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 05:03:59.445308       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1216 05:03:59.446244       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:03:59.450116       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:03:59.863525       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:04:00.543790       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:04:00.553204       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 05:04:00.560470       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 05:04:05.521845       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:05.526788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:05.616468       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:04:05.868011       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1216 05:04:31.099924       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:38678: use of closed network connection
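	"use of closed network connection" on socket receive is ordinarily a client (here something at the gateway address 192.168.103.1, likely the test harness) dropping an open connection, not an apiserver fault. A spot-check against the standard readiness endpoint:
	    kubectl --context no-preload-990631 get --raw '/readyz?verbose'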
	
	
	==> kube-controller-manager [d5c7c7abdd56035c8e84c32c12c3cce78e0aed98264df63b2468cdad3755ac4c] <==
	I1216 05:04:04.681398       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.681309       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.681382       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.681340       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.682086       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.682162       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.682226       1 range_allocator.go:177] "Sending events to api server"
	I1216 05:04:04.682287       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1216 05:04:04.682317       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:04:04.682342       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.682995       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.683044       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.683083       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.683117       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.681330       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.685090       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.685164       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.685281       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.692818       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-990631" podCIDRs=["10.244.0.0/24"]
	I1216 05:04:04.695964       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:04:04.778869       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:04.778891       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 05:04:04.778907       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 05:04:04.796473       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:19.889575       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [19dfdee1ceda35d3ce84e0452c6b41e6276fda0790f18bc268d5330736f6dc48] <==
	I1216 05:04:06.422815       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:04:06.512167       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:04:06.613935       1 shared_informer.go:377] "Caches are synced"
	I1216 05:04:06.613991       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1216 05:04:06.614160       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:04:06.668908       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:04:06.669087       1 server_linux.go:136] "Using iptables Proxier"
	I1216 05:04:06.678712       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:04:06.679607       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 05:04:06.679670       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:04:06.684262       1 config.go:309] "Starting node config controller"
	I1216 05:04:06.684326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:04:06.684358       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:04:06.684362       1 config.go:200] "Starting service config controller"
	I1216 05:04:06.684412       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:04:06.684377       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:04:06.685351       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:04:06.684386       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:04:06.685596       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:04:06.786285       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:04:06.786315       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:04:06.786386       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
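	The kube-proxy warning is advisory: with nodePortAddresses unset, NodePorts bind on every local IP. The effective setting lives in the kube-proxy ConfigMap that kubeadm-based clusters (minikube included) generate; a sketch for inspecting it:
	    kubectl --context no-preload-990631 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses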
	
	
	==> kube-scheduler [5d50387ec66efb753798bea7e80e5fd5a0898c9fdb58c4ef6853c834ecefe136] <==
	E1216 05:03:58.760579       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1216 05:03:58.761608       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1216 05:03:58.812983       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1216 05:03:58.814194       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1216 05:03:58.825380       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 05:03:58.826304       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1216 05:03:58.828441       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1216 05:03:58.829442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1216 05:03:58.884901       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1216 05:03:58.885852       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1216 05:03:58.989372       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1216 05:03:58.990473       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1216 05:03:59.029797       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1216 05:03:59.030971       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1216 05:03:59.031937       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1216 05:03:59.032898       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1216 05:03:59.108733       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 05:03:59.109917       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1216 05:03:59.123210       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1216 05:03:59.124250       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1216 05:03:59.131414       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1216 05:03:59.132562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1216 05:03:59.146888       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1216 05:03:59.147963       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I1216 05:04:01.585185       1 shared_informer.go:377] "Caches are synced"
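	The burst of "forbidden" errors is a startup race: the scheduler's informers begin listing before RBAC bootstrapping has finished, and the final "Caches are synced" line shows it recovered by 05:04:01. The grants can be verified after the fact with an impersonated access check:
	    kubectl --context no-preload-990631 auth can-i list nodes --as=system:kube-scheduler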
	
	
	==> kubelet <==
	Dec 16 05:04:05 no-preload-990631 kubelet[2225]: I1216 05:04:05.906241    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj4g4\" (UniqueName: \"kubernetes.io/projected/cf126848-10d8-452c-92f2-fb35ee37b496-kube-api-access-gj4g4\") pod \"kindnet-p27s9\" (UID: \"cf126848-10d8-452c-92f2-fb35ee37b496\") " pod="kube-system/kindnet-p27s9"
	Dec 16 05:04:05 no-preload-990631 kubelet[2225]: I1216 05:04:05.906273    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z48px\" (UniqueName: \"kubernetes.io/projected/4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0-kube-api-access-z48px\") pod \"kube-proxy-2bjbr\" (UID: \"4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0\") " pod="kube-system/kube-proxy-2bjbr"
	Dec 16 05:04:05 no-preload-990631 kubelet[2225]: I1216 05:04:05.906300    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cf126848-10d8-452c-92f2-fb35ee37b496-cni-cfg\") pod \"kindnet-p27s9\" (UID: \"cf126848-10d8-452c-92f2-fb35ee37b496\") " pod="kube-system/kindnet-p27s9"
	Dec 16 05:04:05 no-preload-990631 kubelet[2225]: I1216 05:04:05.906333    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf126848-10d8-452c-92f2-fb35ee37b496-lib-modules\") pod \"kindnet-p27s9\" (UID: \"cf126848-10d8-452c-92f2-fb35ee37b496\") " pod="kube-system/kindnet-p27s9"
	Dec 16 05:04:06 no-preload-990631 kubelet[2225]: I1216 05:04:06.451628    2225 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-2bjbr" podStartSLOduration=1.451613662 podStartE2EDuration="1.451613662s" podCreationTimestamp="2025-12-16 05:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:06.451320026 +0000 UTC m=+6.153521430" watchObservedRunningTime="2025-12-16 05:04:06.451613662 +0000 UTC m=+6.153815052"
	Dec 16 05:04:06 no-preload-990631 kubelet[2225]: E1216 05:04:06.639570    2225 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-990631" containerName="kube-scheduler"
	Dec 16 05:04:06 no-preload-990631 kubelet[2225]: E1216 05:04:06.645240    2225 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-990631" containerName="kube-controller-manager"
	Dec 16 05:04:08 no-preload-990631 kubelet[2225]: E1216 05:04:08.461762    2225 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-990631" containerName="etcd"
	Dec 16 05:04:08 no-preload-990631 kubelet[2225]: I1216 05:04:08.477376    2225 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-p27s9" podStartSLOduration=1.711182526 podStartE2EDuration="3.477356487s" podCreationTimestamp="2025-12-16 05:04:05 +0000 UTC" firstStartedPulling="2025-12-16 05:04:06.224971462 +0000 UTC m=+5.927172835" lastFinishedPulling="2025-12-16 05:04:07.991145426 +0000 UTC m=+7.693346796" observedRunningTime="2025-12-16 05:04:08.444124596 +0000 UTC m=+8.146326010" watchObservedRunningTime="2025-12-16 05:04:08.477356487 +0000 UTC m=+8.179557892"
	Dec 16 05:04:10 no-preload-990631 kubelet[2225]: E1216 05:04:10.069931    2225 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-990631" containerName="kube-apiserver"
	Dec 16 05:04:16 no-preload-990631 kubelet[2225]: E1216 05:04:16.644256    2225 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-990631" containerName="kube-scheduler"
	Dec 16 05:04:16 no-preload-990631 kubelet[2225]: E1216 05:04:16.649904    2225 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-990631" containerName="kube-controller-manager"
	Dec 16 05:04:18 no-preload-990631 kubelet[2225]: E1216 05:04:18.462727    2225 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-990631" containerName="etcd"
	Dec 16 05:04:18 no-preload-990631 kubelet[2225]: I1216 05:04:18.566643    2225 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 16 05:04:18 no-preload-990631 kubelet[2225]: I1216 05:04:18.699714    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ec3f5f16-7fd9-480f-8793-d015baf3ee90-tmp\") pod \"storage-provisioner\" (UID: \"ec3f5f16-7fd9-480f-8793-d015baf3ee90\") " pod="kube-system/storage-provisioner"
	Dec 16 05:04:18 no-preload-990631 kubelet[2225]: I1216 05:04:18.699757    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2f7a264-de0f-4597-b46f-e25a6b08115f-config-volume\") pod \"coredns-7d764666f9-7l2ds\" (UID: \"a2f7a264-de0f-4597-b46f-e25a6b08115f\") " pod="kube-system/coredns-7d764666f9-7l2ds"
	Dec 16 05:04:18 no-preload-990631 kubelet[2225]: I1216 05:04:18.699780    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8xd\" (UniqueName: \"kubernetes.io/projected/a2f7a264-de0f-4597-b46f-e25a6b08115f-kube-api-access-lr8xd\") pod \"coredns-7d764666f9-7l2ds\" (UID: \"a2f7a264-de0f-4597-b46f-e25a6b08115f\") " pod="kube-system/coredns-7d764666f9-7l2ds"
	Dec 16 05:04:18 no-preload-990631 kubelet[2225]: I1216 05:04:18.699818    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzs4q\" (UniqueName: \"kubernetes.io/projected/ec3f5f16-7fd9-480f-8793-d015baf3ee90-kube-api-access-qzs4q\") pod \"storage-provisioner\" (UID: \"ec3f5f16-7fd9-480f-8793-d015baf3ee90\") " pod="kube-system/storage-provisioner"
	Dec 16 05:04:19 no-preload-990631 kubelet[2225]: E1216 05:04:19.458310    2225 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7l2ds" containerName="coredns"
	Dec 16 05:04:19 no-preload-990631 kubelet[2225]: I1216 05:04:19.828932    2225 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-7l2ds" podStartSLOduration=14.828911024 podStartE2EDuration="14.828911024s" podCreationTimestamp="2025-12-16 05:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:19.607628713 +0000 UTC m=+19.309830109" watchObservedRunningTime="2025-12-16 05:04:19.828911024 +0000 UTC m=+19.531112416"
	Dec 16 05:04:20 no-preload-990631 kubelet[2225]: E1216 05:04:20.084428    2225 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-990631" containerName="kube-apiserver"
	Dec 16 05:04:20 no-preload-990631 kubelet[2225]: I1216 05:04:20.200506    2225 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.200484618 podStartE2EDuration="14.200484618s" podCreationTimestamp="2025-12-16 05:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:20.084323444 +0000 UTC m=+19.786524835" watchObservedRunningTime="2025-12-16 05:04:20.200484618 +0000 UTC m=+19.902686009"
	Dec 16 05:04:20 no-preload-990631 kubelet[2225]: E1216 05:04:20.462250    2225 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7l2ds" containerName="coredns"
	Dec 16 05:04:21 no-preload-990631 kubelet[2225]: E1216 05:04:21.464517    2225 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7l2ds" containerName="coredns"
	Dec 16 05:04:22 no-preload-990631 kubelet[2225]: I1216 05:04:22.020814    2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srlgw\" (UniqueName: \"kubernetes.io/projected/bb35de7b-e677-4d9d-9a4e-fb0c49e18b0a-kube-api-access-srlgw\") pod \"busybox\" (UID: \"bb35de7b-e677-4d9d-9a4e-fb0c49e18b0a\") " pod="default/busybox"
	
	
	==> storage-provisioner [5885320279f32241de82cc4171581f9cbb91eb8723225b57331f3fa255bf283e] <==
	I1216 05:04:18.969568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:04:18.977434       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:04:18.977562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:04:18.980160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:18.987452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:04:18.987685       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:04:18.987806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da628a4a-675d-4b94-b84f-777f27acf7a6", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-990631_05e794b6-d55b-468d-b090-dd7adefa6b19 became leader
	I1216 05:04:18.987962       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-990631_05e794b6-d55b-468d-b090-dd7adefa6b19!
	W1216 05:04:18.990729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:18.994604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:04:19.088759       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-990631_05e794b6-d55b-468d-b090-dd7adefa6b19!
	W1216 05:04:21.000627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:21.064233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:23.067746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:23.073192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:25.076576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:25.092762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:27.096413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:27.100531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:29.103897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:29.107626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:31.110693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:31.114815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
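	The stream of deprecation warnings comes from the provisioner's leader election, which still uses a v1 Endpoints object as its lock (newer client-go defaults to coordination.k8s.io Leases). The lock object itself is visible in-cluster:
	    kubectl --context no-preload-990631 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml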
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-990631 -n no-preload-990631
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-990631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.542392ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:04:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
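The root cause is the paused-state probe: minikube shells out to sudo runc list -f json, and /run/runc (runc's default state directory) does not exist on this node, so every addon or pause operation that checks paused state exits with status 11. Two checks that narrow it down, assuming the profile is still running; default_runtime is CRI-O's standard config key:
    minikube -p embed-certs-127599 ssh -- sudo runc list -f json
    minikube -p embed-certs-127599 ssh -- sudo crio config | grep -n default_runtime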
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-127599 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-127599 describe deploy/metrics-server -n kube-system: exit status 1 (57.655895ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-127599 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
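The assertion reads the deployment's container image, which the --images/--registries flags should have rewritten to fake.domain/registry.k8s.io/echoserver:1.4; since enabling the addon failed outright, the deployment was never created. Had it existed, the expectation could be checked with:
    kubectl --context embed-certs-127599 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'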
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-127599
helpers_test.go:244: (dbg) docker inspect embed-certs-127599:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c",
	        "Created": "2025-12-16T05:03:55.761761294Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:03:55.797459491Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/hostname",
	        "HostsPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/hosts",
	        "LogPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c-json.log",
	        "Name": "/embed-certs-127599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-127599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-127599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c",
	                "LowerDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-127599",
	                "Source": "/var/lib/docker/volumes/embed-certs-127599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-127599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-127599",
	                "name.minikube.sigs.k8s.io": "embed-certs-127599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "109d684e1759d1fb90892f8d9297137d06e91905080fbd0cb1a76d7c4d9d215e",
	            "SandboxKey": "/var/run/docker/netns/109d684e1759",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-127599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fdd74630820b3f4e548e9af2d3fc3c1a7861a593288c72638e11f0a9e6621a7b",
	                    "EndpointID": "d9e74a1de8ace9685b2b6332bf365a4a8658201df61ae74875de5dfc6aaf3a7d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "3a:0b:05:19:31:d0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-127599",
	                        "41c3fa4e1ed9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
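Individual fields of the inspect payload can be pulled with Go templates instead of scanning the full JSON; for example, the container state and the profile network's address used throughout this post-mortem:
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-127599
    docker inspect -f '{{(index .NetworkSettings.Networks "embed-certs-127599").IPAddress}}' embed-certs-127599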
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-127599 -n embed-certs-127599
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-127599 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-127599 logs -n 25: (1.188534713s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-548714 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo containerd config dump                                                                                                                                                                                                  │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ ssh     │ -p cilium-548714 sudo crio config                                                                                                                                                                                                             │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	│ delete  │ -p cilium-548714                                                                                                                                                                                                                              │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │ 16 Dec 25 05:02 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p stopped-upgrade-572334                                                                                                                                                                                                                     │ stopped-upgrade-572334       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p cert-expiration-123545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p cert-expiration-123545                                                                                                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │                     │
	│ stop    │ -p old-k8s-version-545075 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-545075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-132399                                                                                                                                                                                                                  │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ delete  │ -p disable-driver-mounts-148966                                                                                                                                                                                                               │ disable-driver-mounts-148966 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p no-preload-990631 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
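
Several rows above exercise the addon image-override flags. For reference, the metrics-server row corresponds to an invocation of this shape (arguments copied from the table; the fake.domain registry is the test's deliberate stand-in):

	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-545075 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain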
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:04:17
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:04:17.239730  276917 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:04:17.239836  276917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:17.239845  276917 out.go:374] Setting ErrFile to fd 2...
	I1216 05:04:17.239849  276917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:17.240110  276917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:04:17.240562  276917 out.go:368] Setting JSON to false
	I1216 05:04:17.241674  276917 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2808,"bootTime":1765858649,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:04:17.241730  276917 start.go:143] virtualization: kvm guest
	I1216 05:04:17.243623  276917 out.go:179] * [default-k8s-diff-port-375252] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:04:17.244714  276917 notify.go:221] Checking for updates...
	I1216 05:04:17.244726  276917 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:04:17.245901  276917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:04:17.247053  276917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:17.248192  276917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:04:17.249225  276917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:04:17.250228  276917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:04:17.251890  276917 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:04:17.252071  276917 config.go:182] Loaded profile config "no-preload-990631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:04:17.252243  276917 config.go:182] Loaded profile config "old-k8s-version-545075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:04:17.252357  276917 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:04:17.276781  276917 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:04:17.276886  276917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:17.336528  276917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:04:17.325910031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:17.336636  276917 docker.go:319] overlay module found
	I1216 05:04:17.339812  276917 out.go:179] * Using the docker driver based on user configuration
	I1216 05:04:17.341312  276917 start.go:309] selected driver: docker
	I1216 05:04:17.341327  276917 start.go:927] validating driver "docker" against <nil>
	I1216 05:04:17.341338  276917 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:04:17.341908  276917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:17.403292  276917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:04:17.394090177 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:17.403473  276917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:04:17.403670  276917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:17.405247  276917 out.go:179] * Using Docker driver with root privileges
	I1216 05:04:17.406404  276917 cni.go:84] Creating CNI manager for ""
	I1216 05:04:17.406467  276917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:17.406478  276917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:04:17.406548  276917 start.go:353] cluster config:
	{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:17.407726  276917 out.go:179] * Starting "default-k8s-diff-port-375252" primary control-plane node in "default-k8s-diff-port-375252" cluster
	I1216 05:04:17.408731  276917 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:04:17.409733  276917 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:04:17.410754  276917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:17.410801  276917 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:04:17.410823  276917 cache.go:65] Caching tarball of preloaded images
	I1216 05:04:17.410842  276917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:04:17.410934  276917 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:04:17.410950  276917 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:04:17.411090  276917 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:04:17.411123  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json: {Name:mk05b7e9410b298d89eb25ce0d100aa97d39be7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:17.433925  276917 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:04:17.433950  276917 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:04:17.433975  276917 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:04:17.434012  276917 start.go:360] acquireMachinesLock for default-k8s-diff-port-375252: {Name:mkb8cd554afbe4d541db5e350d6207558dad5cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:04:17.434172  276917 start.go:364] duration metric: took 113.217µs to acquireMachinesLock for "default-k8s-diff-port-375252"
	I1216 05:04:17.434209  276917 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:04:17.434328  276917 start.go:125] createHost starting for "" (driver="docker")
	I1216 05:04:13.623160  275199 out.go:252] * Restarting existing docker container for "old-k8s-version-545075" ...
	I1216 05:04:13.623247  275199 cli_runner.go:164] Run: docker start old-k8s-version-545075
	I1216 05:04:13.908616  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:04:13.934940  275199 kic.go:430] container "old-k8s-version-545075" state is running.
	I1216 05:04:13.935723  275199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545075
	I1216 05:04:13.958271  275199 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/config.json ...
	I1216 05:04:13.958554  275199 machine.go:94] provisionDockerMachine start ...
	I1216 05:04:13.958647  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:13.977884  275199 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:13.978168  275199 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1216 05:04:13.978185  275199 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:04:13.978810  275199 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44444->127.0.0.1:33076: read: connection reset by peer
	I1216 05:04:17.109284  275199 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-545075
	
	I1216 05:04:17.109315  275199 ubuntu.go:182] provisioning hostname "old-k8s-version-545075"
	I1216 05:04:17.109376  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:17.128599  275199 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:17.128914  275199 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1216 05:04:17.128938  275199 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-545075 && echo "old-k8s-version-545075" | sudo tee /etc/hostname
	I1216 05:04:17.266529  275199 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-545075
	
	I1216 05:04:17.266606  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:17.289002  275199 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:17.289275  275199 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1216 05:04:17.289301  275199 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-545075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-545075/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-545075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:04:17.423741  275199 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:04:17.423769  275199 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:04:17.423807  275199 ubuntu.go:190] setting up certificates
	I1216 05:04:17.423823  275199 provision.go:84] configureAuth start
	I1216 05:04:17.423899  275199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545075
	I1216 05:04:17.444658  275199 provision.go:143] copyHostCerts
	I1216 05:04:17.444747  275199 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:04:17.444765  275199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:04:17.444852  275199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:04:17.445003  275199 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:04:17.445029  275199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:04:17.445083  275199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:04:17.445189  275199 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:04:17.445199  275199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:04:17.445237  275199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:04:17.445343  275199 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-545075 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-545075]
	I1216 05:04:17.535353  275199 provision.go:177] copyRemoteCerts
	I1216 05:04:17.535430  275199 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:04:17.535484  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:17.554976  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:17.651925  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:04:17.671847  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:04:17.690206  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 05:04:17.709206  275199 provision.go:87] duration metric: took 285.363629ms to configureAuth
	I1216 05:04:17.709230  275199 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:04:17.709416  275199 config.go:182] Loaded profile config "old-k8s-version-545075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:04:17.709535  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:17.728840  275199 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:17.729149  275199 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1216 05:04:17.729177  275199 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:04:18.048941  275199 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:04:18.048978  275199 machine.go:97] duration metric: took 4.090396662s to provisionDockerMachine
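
A quick illustrative follow-up (not part of the log) to confirm the drop-in written above landed and cri-o came back up:

	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio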
	I1216 05:04:18.048995  275199 start.go:293] postStartSetup for "old-k8s-version-545075" (driver="docker")
	I1216 05:04:18.049009  275199 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:04:18.049094  275199 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:04:18.049143  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:18.070716  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:18.174849  275199 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:04:18.178858  275199 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:04:18.178890  275199 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:04:18.178902  275199 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:04:18.178972  275199 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:04:18.179084  275199 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:04:18.179197  275199 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:04:18.187585  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:18.206234  275199 start.go:296] duration metric: took 157.224509ms for postStartSetup
	I1216 05:04:18.206319  275199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:04:18.206375  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:18.227959  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:18.319324  275199 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:04:18.325627  275199 fix.go:56] duration metric: took 4.72762935s for fixHost
	I1216 05:04:18.325660  275199 start.go:83] releasing machines lock for "old-k8s-version-545075", held for 4.727688258s
	I1216 05:04:18.325730  275199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545075
	I1216 05:04:18.343905  275199 ssh_runner.go:195] Run: cat /version.json
	I1216 05:04:18.343960  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:18.344051  275199 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:04:18.344124  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:18.366732  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:18.366903  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:16.069242  268657 addons.go:530] duration metric: took 485.243042ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:04:16.371204  268657 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-127599" context rescaled to 1 replicas
	W1216 05:04:17.872334  268657 node_ready.go:57] node "embed-certs-127599" has "Ready":"False" status (will retry)
	I1216 05:04:18.520711  275199 ssh_runner.go:195] Run: systemctl --version
	I1216 05:04:18.527414  275199 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:04:18.564193  275199 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:04:18.570338  275199 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:04:18.570402  275199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:04:18.579999  275199 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:04:18.580040  275199 start.go:496] detecting cgroup driver to use...
	I1216 05:04:18.580083  275199 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:04:18.580157  275199 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:04:18.599791  275199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:04:18.615000  275199 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:04:18.615077  275199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:04:18.630705  275199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:04:18.643660  275199 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:04:18.742403  275199 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:04:18.833821  275199 docker.go:234] disabling docker service ...
	I1216 05:04:18.833882  275199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:04:18.848486  275199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:04:18.862796  275199 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:04:18.971102  275199 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:04:19.076455  275199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:04:19.089401  275199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:04:19.104200  275199 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1216 05:04:19.104261  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.113550  275199 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:04:19.113612  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.123008  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.132370  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.141529  275199 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:04:19.149888  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.159325  275199 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.170473  275199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:19.180250  275199 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:04:19.188205  275199 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:04:19.195791  275199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:19.275382  275199 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:04:21.353547  275199 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.078110101s)
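
Taken together, the sed edits above amount to roughly the following cri-o configuration. This is a sketch only: the 99-example.conf drop-in path is hypothetical, and the section placement of each key is an assumption (the sed commands match the keys wherever they already sit in 02-crio.conf):

	# Hypothetical drop-in reproducing the effect of the sed edits above;
	# cri-o reads conf.d files in lexical order, so a later file wins.
	sudo tee /etc/crio/crio.conf.d/99-example.conf <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio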
	I1216 05:04:21.353579  275199 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:04:21.353630  275199 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:04:21.357833  275199 start.go:564] Will wait 60s for crictl version
	I1216 05:04:21.357898  275199 ssh_runner.go:195] Run: which crictl
	I1216 05:04:21.361802  275199 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:04:21.387774  275199 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:04:21.387856  275199 ssh_runner.go:195] Run: crio --version
	I1216 05:04:21.421094  275199 ssh_runner.go:195] Run: crio --version
	I1216 05:04:21.458278  275199 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	W1216 05:04:18.532992  263814 node_ready.go:57] node "no-preload-990631" has "Ready":"False" status (will retry)
	I1216 05:04:19.032914  263814 node_ready.go:49] node "no-preload-990631" is "Ready"
	I1216 05:04:19.032956  263814 node_ready.go:38] duration metric: took 12.503090752s for node "no-preload-990631" to be "Ready" ...
	I1216 05:04:19.032974  263814 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:04:19.033049  263814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:19.045687  263814 api_server.go:72] duration metric: took 12.961240582s to wait for apiserver process to appear ...
	I1216 05:04:19.045719  263814 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:04:19.045746  263814 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:04:19.052682  263814 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 05:04:19.053785  263814 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:04:19.053815  263814 api_server.go:131] duration metric: took 8.088348ms to wait for apiserver health ...
	I1216 05:04:19.053826  263814 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:04:19.057753  263814 system_pods.go:59] 8 kube-system pods found
	I1216 05:04:19.057797  263814 system_pods.go:61] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:19.057805  263814 system_pods.go:61] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running
	I1216 05:04:19.057811  263814 system_pods.go:61] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:04:19.057817  263814 system_pods.go:61] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:19.057821  263814 system_pods.go:61] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running
	I1216 05:04:19.057825  263814 system_pods.go:61] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:04:19.057828  263814 system_pods.go:61] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running
	I1216 05:04:19.057833  263814 system_pods.go:61] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:19.057838  263814 system_pods.go:74] duration metric: took 4.007046ms to wait for pod list to return data ...
	I1216 05:04:19.057849  263814 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:04:19.060860  263814 default_sa.go:45] found service account: "default"
	I1216 05:04:19.060885  263814 default_sa.go:55] duration metric: took 3.028868ms for default service account to be created ...
	I1216 05:04:19.060895  263814 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:04:19.064065  263814 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:19.064099  263814 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:19.064108  263814 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running
	I1216 05:04:19.064128  263814 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:04:19.064137  263814 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:19.064159  263814 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running
	I1216 05:04:19.064166  263814 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:04:19.064172  263814 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running
	I1216 05:04:19.064179  263814 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:19.064212  263814 retry.go:31] will retry after 218.018541ms: missing components: kube-dns
	I1216 05:04:19.286358  263814 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:19.286390  263814 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:19.286395  263814 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running
	I1216 05:04:19.286402  263814 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:04:19.286407  263814 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:19.286411  263814 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running
	I1216 05:04:19.286415  263814 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:04:19.286418  263814 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running
	I1216 05:04:19.286423  263814 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:19.286436  263814 retry.go:31] will retry after 382.279473ms: missing components: kube-dns
	I1216 05:04:19.825322  263814 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:19.825357  263814 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:19.825367  263814 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running
	I1216 05:04:19.825374  263814 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:04:19.825382  263814 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:19.825387  263814 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running
	I1216 05:04:19.825394  263814 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:04:19.825399  263814 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running
	I1216 05:04:19.825406  263814 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:19.825418  263814 system_pods.go:126] duration metric: took 764.515965ms to wait for k8s-apps to be running ...
	I1216 05:04:19.825431  263814 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:04:19.825481  263814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:04:19.841650  263814 system_svc.go:56] duration metric: took 16.210649ms WaitForService to wait for kubelet
	I1216 05:04:19.841678  263814 kubeadm.go:587] duration metric: took 13.757240355s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:19.841697  263814 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:04:20.078616  263814 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:04:20.078651  263814 node_conditions.go:123] node cpu capacity is 8
	I1216 05:04:20.078671  263814 node_conditions.go:105] duration metric: took 236.969298ms to run NodePressure ...
	I1216 05:04:20.078690  263814 start.go:242] waiting for startup goroutines ...
	I1216 05:04:20.078699  263814 start.go:247] waiting for cluster config update ...
	I1216 05:04:20.078711  263814 start.go:256] writing updated cluster config ...
	I1216 05:04:20.079046  263814 ssh_runner.go:195] Run: rm -f paused
	I1216 05:04:20.088292  263814 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:20.177230  263814 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.202814  263814 pod_ready.go:94] pod "coredns-7d764666f9-7l2ds" is "Ready"
	I1216 05:04:20.202846  263814 pod_ready.go:86] duration metric: took 25.583765ms for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.205136  263814 pod_ready.go:83] waiting for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.209844  263814 pod_ready.go:94] pod "etcd-no-preload-990631" is "Ready"
	I1216 05:04:20.209866  263814 pod_ready.go:86] duration metric: took 4.708808ms for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.211785  263814 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.215774  263814 pod_ready.go:94] pod "kube-apiserver-no-preload-990631" is "Ready"
	I1216 05:04:20.215801  263814 pod_ready.go:86] duration metric: took 3.995361ms for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.217762  263814 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.492729  263814 pod_ready.go:94] pod "kube-controller-manager-no-preload-990631" is "Ready"
	I1216 05:04:20.492755  263814 pod_ready.go:86] duration metric: took 274.97408ms for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:20.692751  263814 pod_ready.go:83] waiting for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:21.117132  263814 pod_ready.go:94] pod "kube-proxy-2bjbr" is "Ready"
	I1216 05:04:21.117160  263814 pod_ready.go:86] duration metric: took 424.386126ms for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:21.294157  263814 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:21.692685  263814 pod_ready.go:94] pod "kube-scheduler-no-preload-990631" is "Ready"
	I1216 05:04:21.692713  263814 pod_ready.go:86] duration metric: took 398.531676ms for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:21.692728  263814 pod_ready.go:40] duration metric: took 1.604403534s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:21.753508  263814 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:04:21.755601  263814 out.go:179] * Done! kubectl is now configured to use "no-preload-990631" cluster and "default" namespace by default
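
The pod_ready lines above poll each control-plane pod until it is "Ready" or gone, with a 4m0s budget. A minimal client-go sketch of that loop, assuming a kubeconfig at the default location; this is an illustration of the check, not minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        deadline := time.Now().Add(4 * time.Minute) // the "extra waiting up to 4m0s" budget
        for _, sel := range selectors {
            for {
                pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                    metav1.ListOptions{LabelSelector: sel})
                if err != nil {
                    panic(err)
                }
                ready := true // zero matching pods counts as "gone"
                for i := range pods.Items {
                    ready = ready && podReady(&pods.Items[i])
                }
                if ready || time.Now().After(deadline) {
                    break
                }
                time.Sleep(500 * time.Millisecond)
            }
            fmt.Printf("pods matching %q are Ready or gone\n", sel)
        }
    }
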
	I1216 05:04:17.436627  276917 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:04:17.436875  276917 start.go:159] libmachine.API.Create for "default-k8s-diff-port-375252" (driver="docker")
	I1216 05:04:17.436914  276917 client.go:173] LocalClient.Create starting
	I1216 05:04:17.436985  276917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:04:17.437051  276917 main.go:143] libmachine: Decoding PEM data...
	I1216 05:04:17.437082  276917 main.go:143] libmachine: Parsing certificate...
	I1216 05:04:17.437170  276917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:04:17.437205  276917 main.go:143] libmachine: Decoding PEM data...
	I1216 05:04:17.437225  276917 main.go:143] libmachine: Parsing certificate...
	I1216 05:04:17.437651  276917 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:04:17.455879  276917 cli_runner.go:211] docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:04:17.455943  276917 network_create.go:284] running [docker network inspect default-k8s-diff-port-375252] to gather additional debugging logs...
	I1216 05:04:17.455978  276917 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252
	W1216 05:04:17.474855  276917 cli_runner.go:211] docker network inspect default-k8s-diff-port-375252 returned with exit code 1
	I1216 05:04:17.474894  276917 network_create.go:287] error running [docker network inspect default-k8s-diff-port-375252]: docker network inspect default-k8s-diff-port-375252: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-375252 not found
	I1216 05:04:17.474916  276917 network_create.go:289] output of [docker network inspect default-k8s-diff-port-375252]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-375252 not found
	
	** /stderr **
	I1216 05:04:17.475079  276917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:17.492557  276917 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:04:17.493272  276917 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:04:17.493926  276917 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:04:17.494734  276917 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e6db20}
	I1216 05:04:17.494760  276917 network_create.go:124] attempt to create docker network default-k8s-diff-port-375252 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 05:04:17.494808  276917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-375252 default-k8s-diff-port-375252
	I1216 05:04:17.545363  276917 network_create.go:108] docker network default-k8s-diff-port-375252 192.168.76.0/24 created
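
The network.go lines above walk candidate /24 subnets (49, 58, 67, then 76) until one is free. A rough sketch of that scan; the +9 step and the interface-address check are assumptions read off the log output, not minikube's actual algorithm, which also consults docker's network state:

    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether the candidate gateway IP is already bound
    // to a local interface (e.g. an existing br-* bridge).
    func taken(gateway string) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative: assume taken on error
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gateway {
                return true
            }
        }
        return false
    }

    func main() {
        for octet := 49; octet < 255; octet += 9 { // 49, 58, 67, 76, ... as in the log
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gw := fmt.Sprintf("192.168.%d.1", octet)
            if taken(gw) {
                fmt.Printf("skipping subnet %s that is taken\n", subnet)
                continue
            }
            fmt.Printf("using free private subnet %s (gateway %s)\n", subnet, gw)
            return
        }
    }
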
	I1216 05:04:17.545398  276917 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-375252" container
	I1216 05:04:17.545470  276917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:04:17.564970  276917 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-375252 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-375252 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:04:17.585119  276917 oci.go:103] Successfully created a docker volume default-k8s-diff-port-375252
	I1216 05:04:17.585206  276917 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-375252-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-375252 --entrypoint /usr/bin/test -v default-k8s-diff-port-375252:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:04:17.982031  276917 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-375252
	I1216 05:04:17.982123  276917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:17.982140  276917 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:04:17.982213  276917 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-375252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 05:04:21.241223  276917 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-375252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.258940978s)
	I1216 05:04:21.241263  276917 kic.go:203] duration metric: took 3.259119741s to extract preloaded images to volume ...
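
The preload extraction above runs tar inside a throwaway kicbase container so the cached images land directly in the named volume. A sketch of the same invocation via os/exec, with the duration metric the cli_runner lines report; image reference and paths are copied from the log, error handling is simplified:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4"
        args := []string{
            "run", "--rm", "--entrypoint", "/usr/bin/tar",
            "-v", tarball + ":/preloaded.tar:ro",
            "-v", "default-k8s-diff-port-375252:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        }
        start := time.Now()
        if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
    }
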
	W1216 05:04:21.241349  276917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 05:04:21.241392  276917 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 05:04:21.241431  276917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 05:04:21.300391  276917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-375252 --name default-k8s-diff-port-375252 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-375252 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-375252 --network default-k8s-diff-port-375252 --ip 192.168.76.2 --volume default-k8s-diff-port-375252:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 05:04:21.582823  276917 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Running}}
	I1216 05:04:21.603080  276917 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:04:21.624563  276917 cli_runner.go:164] Run: docker exec default-k8s-diff-port-375252 stat /var/lib/dpkg/alternatives/iptables
	I1216 05:04:21.675834  276917 oci.go:144] the created container "default-k8s-diff-port-375252" has a running status.
	I1216 05:04:21.675877  276917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa...
	I1216 05:04:21.766702  276917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 05:04:21.798528  276917 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:04:21.828950  276917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 05:04:21.828986  276917 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-375252 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 05:04:21.885920  276917 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:04:21.919598  276917 machine.go:94] provisionDockerMachine start ...
	I1216 05:04:21.919857  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:21.962988  276917 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:21.964342  276917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1216 05:04:21.964820  276917 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:04:22.114886  276917 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:04:22.114914  276917 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-375252"
	I1216 05:04:22.114993  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:22.137162  276917 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:22.137459  276917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1216 05:04:22.137484  276917 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-375252 && echo "default-k8s-diff-port-375252" | sudo tee /etc/hostname
	I1216 05:04:21.459551  275199 cli_runner.go:164] Run: docker network inspect old-k8s-version-545075 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:21.479140  275199 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:04:21.483927  275199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:21.495746  275199 kubeadm.go:884] updating cluster {Name:old-k8s-version-545075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-545075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:04:21.495891  275199 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 05:04:21.495950  275199 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:21.530070  275199 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:21.530098  275199 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:04:21.530156  275199 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:21.557449  275199 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:21.557476  275199 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:04:21.557485  275199 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1216 05:04:21.557627  275199 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-545075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-545075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:04:21.557717  275199 ssh_runner.go:195] Run: crio config
	I1216 05:04:21.607486  275199 cni.go:84] Creating CNI manager for ""
	I1216 05:04:21.607516  275199 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:21.607532  275199 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:04:21.607561  275199 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-545075 NodeName:old-k8s-version-545075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:04:21.607735  275199 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-545075"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
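minikube renders the kubeadm config above from templates filled with the cluster's settings. A minimal text/template sketch of the InitConfiguration portion; the params struct and field names here are hypothetical, chosen only to match the values visible in the log, and are not minikube's actual types:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    `

    // params is an illustrative container for the cluster settings;
    // the real generator lives in minikube's bootstrapper packages.
    type params struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
    }

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, params{
            AdvertiseAddress: "192.168.85.2",
            APIServerPort:    8443,
            CRISocket:        "unix:///var/run/crio/crio.sock",
            NodeName:         "old-k8s-version-545075",
        })
    }
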
	I1216 05:04:21.607811  275199 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1216 05:04:21.616903  275199 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:04:21.616971  275199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:04:21.626299  275199 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1216 05:04:21.639952  275199 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:04:21.656779  275199 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1216 05:04:21.670595  275199 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:04:21.675320  275199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
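
The /etc/hosts commands above are an idempotent upsert: strip any existing line for the hostname, then append the desired mapping. The same logic in plain Go, writing the file directly instead of going through /tmp and sudo cp as the logged bash one-liner does:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost removes any stale "<ip>\t<host>" line and appends the
    // desired mapping, so repeated runs leave exactly one entry.
    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale mapping for this hostname
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
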
	I1216 05:04:21.686561  275199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:21.787121  275199 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:21.808625  275199 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075 for IP: 192.168.85.2
	I1216 05:04:21.808654  275199 certs.go:195] generating shared ca certs ...
	I1216 05:04:21.808685  275199 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:21.808863  275199 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:04:21.808919  275199 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:04:21.808931  275199 certs.go:257] generating profile certs ...
	I1216 05:04:21.809070  275199 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/client.key
	I1216 05:04:21.809165  275199 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/apiserver.key.32d58d38
	I1216 05:04:21.809227  275199 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/proxy-client.key
	I1216 05:04:21.809376  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:04:21.809428  275199 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:04:21.809441  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:04:21.809477  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:04:21.809572  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:04:21.809609  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:04:21.809681  275199 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:21.810539  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:04:21.851608  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:04:21.877924  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:04:21.903751  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:04:21.945572  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 05:04:21.984966  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:04:22.012900  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:04:22.036264  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:04:22.059251  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:04:22.079477  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:04:22.099121  275199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:04:22.118511  275199 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:04:22.135116  275199 ssh_runner.go:195] Run: openssl version
	I1216 05:04:22.142797  275199 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:22.150894  275199 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:04:22.158835  275199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:22.162831  275199 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:22.162886  275199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:22.203226  275199 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:04:22.211531  275199 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:04:22.219189  275199 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:04:22.227007  275199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:04:22.230749  275199 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:04:22.230805  275199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:04:22.267953  275199 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:04:22.276060  275199 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:04:22.285205  275199 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:04:22.293656  275199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:04:22.298123  275199 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:04:22.298177  275199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:04:22.347282  275199 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:04:22.355972  275199 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:04:22.360277  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:04:22.409345  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:04:22.452563  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:04:22.497547  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:04:22.555583  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:04:22.596449  275199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
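
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds). The equivalent check in pure Go with crypto/x509, using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within duration d, mirroring openssl's -checkend semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
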
	I1216 05:04:22.659677  275199 kubeadm.go:401] StartCluster: {Name:old-k8s-version-545075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-545075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:22.659799  275199 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:04:22.659867  275199 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:04:22.691586  275199 cri.go:89] found id: "01c4e82517b36d1f66b8f2d1adb4883fc3cc6a22d531ab5783d092744f69a8d2"
	I1216 05:04:22.691611  275199 cri.go:89] found id: "5aeeae98995f110f455b137eea822f077ed8722bcfac6f2d52cb7baa211a4f13"
	I1216 05:04:22.691617  275199 cri.go:89] found id: "31f7764ec0af58c2787749a48d903be38a43b0773a2a9b369902c7942218dc65"
	I1216 05:04:22.691623  275199 cri.go:89] found id: "af4a752092a6dfab68baa25810fa6bf8af6321385deaa1ca39c71f745e63993c"
	I1216 05:04:22.691627  275199 cri.go:89] found id: ""
	I1216 05:04:22.691676  275199 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:04:22.704071  275199 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:04:22Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:04:22.704197  275199 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:04:22.713981  275199 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:04:22.714001  275199 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:04:22.714056  275199 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:04:22.723309  275199 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:04:22.724549  275199 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-545075" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:22.725419  275199 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-545075" cluster setting kubeconfig missing "old-k8s-version-545075" context setting]
	I1216 05:04:22.726579  275199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:22.728895  275199 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:04:22.736730  275199 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 05:04:22.736764  275199 kubeadm.go:602] duration metric: took 22.756696ms to restartPrimaryControlPlane
	I1216 05:04:22.736776  275199 kubeadm.go:403] duration metric: took 77.108414ms to StartCluster
	I1216 05:04:22.736791  275199 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:22.736855  275199 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:22.738289  275199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:22.738531  275199 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:04:22.738730  275199 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:04:22.738813  275199 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-545075"
	I1216 05:04:22.738783  275199 config.go:182] Loaded profile config "old-k8s-version-545075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:04:22.738844  275199 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-545075"
	W1216 05:04:22.738858  275199 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:04:22.738885  275199 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-545075"
	I1216 05:04:22.738895  275199 host.go:66] Checking if "old-k8s-version-545075" exists ...
	I1216 05:04:22.738898  275199 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-545075"
	I1216 05:04:22.739231  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:04:22.738821  275199 addons.go:70] Setting dashboard=true in profile "old-k8s-version-545075"
	I1216 05:04:22.739385  275199 addons.go:239] Setting addon dashboard=true in "old-k8s-version-545075"
	I1216 05:04:22.739391  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	W1216 05:04:22.739397  275199 addons.go:248] addon dashboard should already be in state true
	I1216 05:04:22.739429  275199 host.go:66] Checking if "old-k8s-version-545075" exists ...
	I1216 05:04:22.739904  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:04:22.740393  275199 out.go:179] * Verifying Kubernetes components...
	I1216 05:04:22.741539  275199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:22.770093  275199 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-545075"
	W1216 05:04:22.770115  275199 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:04:22.770131  275199 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:04:22.770210  275199 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:04:22.770142  275199 host.go:66] Checking if "old-k8s-version-545075" exists ...
	I1216 05:04:22.770676  275199 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:04:22.773651  275199 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:22.773670  275199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:04:22.773725  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:22.775049  275199 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 05:04:22.776272  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:04:22.776292  275199 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:04:22.776344  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:22.805209  275199 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:22.805232  275199 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:04:22.805332  275199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:04:22.810010  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:22.817004  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:22.836284  275199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:04:22.897124  275199 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:22.912387  275199 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-545075" to be "Ready" ...
	I1216 05:04:22.925502  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:04:22.925529  275199 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:04:22.935387  275199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:22.941896  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:04:22.941923  275199 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:04:22.954671  275199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:22.959230  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:04:22.959255  275199 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:04:22.976514  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:04:22.976539  275199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:04:23.000990  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:04:23.001040  275199 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:04:23.021657  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:04:23.021706  275199 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:04:23.041161  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:04:23.041190  275199 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:04:23.058468  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:04:23.058499  275199 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:04:23.075948  275199 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:04:23.075971  275199 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:04:23.091227  275199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
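
The addon install above stages each manifest under /etc/kubernetes/addons and then applies them all in a single kubectl invocation. A sketch of that final step via os/exec, using the binary path and KUBECONFIG from the log; the manifest list is abbreviated:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-dp.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
            // ... remaining dashboard-*.yaml files as listed in the log
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m) // one -f per staged manifest
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.28.0/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
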
	I1216 05:04:22.277840  276917 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:04:22.277944  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:22.299040  276917 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:22.299323  276917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1216 05:04:22.299356  276917 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-375252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-375252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-375252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:04:22.435916  276917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:04:22.435950  276917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:04:22.435995  276917 ubuntu.go:190] setting up certificates
	I1216 05:04:22.436057  276917 provision.go:84] configureAuth start
	I1216 05:04:22.436130  276917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:04:22.460339  276917 provision.go:143] copyHostCerts
	I1216 05:04:22.460406  276917 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:04:22.460418  276917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:04:22.460500  276917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:04:22.460624  276917 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:04:22.460637  276917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:04:22.460680  276917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:04:22.460781  276917 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:04:22.460792  276917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:04:22.460832  276917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:04:22.460907  276917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-375252 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-375252 localhost minikube]
	I1216 05:04:22.600360  276917 provision.go:177] copyRemoteCerts
	I1216 05:04:22.600408  276917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:04:22.600448  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:22.620529  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:22.716327  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:04:22.738642  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 05:04:22.768782  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:04:22.807483  276917 provision.go:87] duration metric: took 371.413003ms to configureAuth
	I1216 05:04:22.807527  276917 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:04:22.807718  276917 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:04:22.807822  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:22.836625  276917 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:22.836938  276917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1216 05:04:22.836956  276917 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:04:23.167256  276917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:04:23.167286  276917 machine.go:97] duration metric: took 1.247665995s to provisionDockerMachine
	I1216 05:04:23.167296  276917 client.go:176] duration metric: took 5.730375632s to LocalClient.Create
	I1216 05:04:23.167318  276917 start.go:167] duration metric: took 5.730446026s to libmachine.API.Create "default-k8s-diff-port-375252"
	I1216 05:04:23.167325  276917 start.go:293] postStartSetup for "default-k8s-diff-port-375252" (driver="docker")
	I1216 05:04:23.167334  276917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:04:23.167429  276917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:04:23.167479  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:23.188202  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:23.285282  276917 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:04:23.288865  276917 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:04:23.288895  276917 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:04:23.288906  276917 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:04:23.289041  276917 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:04:23.289136  276917 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:04:23.289256  276917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:04:23.296667  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:23.316596  276917 start.go:296] duration metric: took 149.256463ms for postStartSetup
	I1216 05:04:23.316950  276917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:04:23.336605  276917 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:04:23.336859  276917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:04:23.336915  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:23.356077  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:23.447461  276917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:04:23.452649  276917 start.go:128] duration metric: took 6.018305012s to createHost
	I1216 05:04:23.452674  276917 start.go:83] releasing machines lock for "default-k8s-diff-port-375252", held for 6.018485345s
	I1216 05:04:23.452745  276917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:04:23.476619  276917 ssh_runner.go:195] Run: cat /version.json
	I1216 05:04:23.476680  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:23.476693  276917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:04:23.476788  276917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:04:23.498437  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:23.500332  276917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:04:23.651234  276917 ssh_runner.go:195] Run: systemctl --version
	I1216 05:04:23.657524  276917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:04:23.693545  276917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:04:23.697969  276917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:04:23.698047  276917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:04:23.722709  276917 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:04:23.722730  276917 start.go:496] detecting cgroup driver to use...
	I1216 05:04:23.722764  276917 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:04:23.722815  276917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:04:23.738833  276917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:04:23.751240  276917 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:04:23.751297  276917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:04:23.767312  276917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:04:23.791192  276917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:04:23.902175  276917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:04:24.029578  276917 docker.go:234] disabling docker service ...
	I1216 05:04:24.029650  276917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:04:24.052284  276917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:04:24.067563  276917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:04:24.170602  276917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:04:24.265610  276917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:04:24.278835  276917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:04:24.294466  276917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:04:24.294531  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.307631  276917 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:04:24.307683  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.317271  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.326666  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.336300  276917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:04:24.345129  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.354579  276917 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.370064  276917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:04:24.381197  276917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:04:24.389709  276917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:04:24.397476  276917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:24.485925  276917 ssh_runner.go:195] Run: sudo systemctl restart crio
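The sed edits above pin the pause image and switch cri-o to the systemd cgroup manager before the daemon restart. Expressed as a self-contained Go sketch using the same regexes as the log's sed calls (not minikube's actual code):

package sketch

import (
	"os"
	"regexp"
)

// configureCrio rewrites pause_image and cgroup_manager in a crio drop-in,
// e.g. /etc/crio/crio.conf.d/02-crio.conf, the file edited in the log above.
func configureCrio(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, data, 0o644)
}

As the log shows, a "sudo systemctl restart crio" must follow for the edits to take effect.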
	I1216 05:04:24.628862  276917 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:04:24.628933  276917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:04:24.633240  276917 start.go:564] Will wait 60s for crictl version
	I1216 05:04:24.633305  276917 ssh_runner.go:195] Run: which crictl
	I1216 05:04:24.636923  276917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:04:24.663957  276917 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:04:24.664056  276917 ssh_runner.go:195] Run: crio --version
	I1216 05:04:24.700996  276917 ssh_runner.go:195] Run: crio --version
	I1216 05:04:24.732232  276917 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1216 05:04:20.371773  268657 node_ready.go:57] node "embed-certs-127599" has "Ready":"False" status (will retry)
	W1216 05:04:22.871333  268657 node_ready.go:57] node "embed-certs-127599" has "Ready":"False" status (will retry)
	I1216 05:04:24.876293  275199 node_ready.go:49] node "old-k8s-version-545075" is "Ready"
	I1216 05:04:24.876324  275199 node_ready.go:38] duration metric: took 1.963908874s for node "old-k8s-version-545075" to be "Ready" ...
	I1216 05:04:24.876341  275199 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:04:24.876412  275199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:25.660839  275199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.725414856s)
	I1216 05:04:25.660861  275199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.706155903s)
	I1216 05:04:25.968010  275199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.876729645s)
	I1216 05:04:25.968232  275199 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.091795126s)
	I1216 05:04:25.968257  275199 api_server.go:72] duration metric: took 3.22966823s to wait for apiserver process to appear ...
	I1216 05:04:25.968264  275199 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:04:25.968288  275199 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:04:25.969454  275199 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-545075 addons enable metrics-server
	
	I1216 05:04:25.970995  275199 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1216 05:04:24.733590  276917 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:24.763658  276917 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 05:04:24.769402  276917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:24.783134  276917 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:04:24.783277  276917 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:24.783337  276917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:24.822850  276917 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:24.822870  276917 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:04:24.822944  276917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:24.848386  276917 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:24.848411  276917 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:04:24.848418  276917 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1216 05:04:24.848523  276917 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-375252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:04:24.848603  276917 ssh_runner.go:195] Run: crio config
	I1216 05:04:24.930792  276917 cni.go:84] Creating CNI manager for ""
	I1216 05:04:24.930823  276917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:24.930844  276917 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:04:24.930873  276917 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-375252 NodeName:default-k8s-diff-port-375252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:04:24.931033  276917 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-375252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:04:24.931108  276917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:04:24.945621  276917 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:04:24.945697  276917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:04:24.960512  276917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1216 05:04:24.975778  276917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:04:24.992817  276917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
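With the kubeadm config rendered to /var/tmp/minikube/kubeadm.yaml.new, one way to sanity-check such a file before the real init (a hypothetical helper, not part of minikube) is kubeadm's own dry-run mode:

package sketch

import "os/exec"

// dryRunKubeadm validates a rendered config by running kubeadm init --dry-run,
// which parses the config and runs preflight without mutating the node.
func dryRunKubeadm(configPath string) ([]byte, error) {
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", configPath, "--dry-run")
	return cmd.CombinedOutput()
}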
	I1216 05:04:25.009213  276917 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:04:25.014994  276917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:25.028770  276917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:25.123250  276917 ssh_runner.go:195] Run: sudo systemctl start kubelet
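The two scp lines and the systemctl pair above install the kubelet drop-in and unit, then reload and start the service. A compact Go sketch of the drop-in half of that step (paths taken from the log; code illustrative):

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
)

// installKubeletDropIn writes 10-kubeadm.conf (the [Service] override shown
// earlier in the log), then daemon-reloads and starts kubelet.
func installKubeletDropIn(unit []byte) error {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), unit, 0o644); err != nil {
		return err
	}
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		return err
	}
	return exec.Command("systemctl", "start", "kubelet").Run()
}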
	I1216 05:04:25.145673  276917 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252 for IP: 192.168.76.2
	I1216 05:04:25.145695  276917 certs.go:195] generating shared ca certs ...
	I1216 05:04:25.145715  276917 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.145877  276917 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:04:25.145932  276917 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:04:25.145946  276917 certs.go:257] generating profile certs ...
	I1216 05:04:25.146039  276917 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key
	I1216 05:04:25.146059  276917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.crt with IP's: []
	I1216 05:04:25.364495  276917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.crt ...
	I1216 05:04:25.364530  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.crt: {Name:mkb527b76296aab2183819f30df28a31ddcd94af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.364732  276917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key ...
	I1216 05:04:25.364762  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key: {Name:mk134a349e31e121726c23cdd4c6546051879822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.364868  276917 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f
	I1216 05:04:25.364897  276917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt.7bfc343f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 05:04:25.458506  276917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt.7bfc343f ...
	I1216 05:04:25.458533  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt.7bfc343f: {Name:mkd6c2cbf0f0ab527556172af63a11310409cae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.458679  276917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f ...
	I1216 05:04:25.458692  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f: {Name:mk70aea777a1acbdd01425c98e4088c44247bf2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.458790  276917 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt.7bfc343f -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt
	I1216 05:04:25.458910  276917 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key
	I1216 05:04:25.458999  276917 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key
	I1216 05:04:25.459029  276917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt with IP's: []
	I1216 05:04:25.497643  276917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt ...
	I1216 05:04:25.497672  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt: {Name:mkd474b3b48dc58c95c68dc62592233f6eb9d1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:25.497822  276917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key ...
	I1216 05:04:25.497834  276917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key: {Name:mk24d0a80b44da0c838d5cc5a94198b14104e1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
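The certs.go/crypto.go lines above mint the per-profile certificates (client, apiserver with its SAN list, aggregator proxy-client), each signed by the shared minikube CA. A minimal crypto/x509 sketch of signing one such serving cert, assuming the CA pair is already loaded (illustrative, not minikube's generator):

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a server cert for the given SAN IPs (e.g. 10.96.0.1,
// 127.0.0.1, 192.168.76.2 from the log), signed by the CA cert/key.
func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil // DER cert plus its private key
}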
	I1216 05:04:25.498013  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:04:25.498074  276917 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:04:25.498085  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:04:25.498109  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:04:25.498133  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:04:25.498156  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:04:25.498198  276917 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:25.498790  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:04:25.519438  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:04:25.539339  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:04:25.561331  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:04:25.586823  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 05:04:25.605210  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:04:25.628909  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:04:25.658176  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 05:04:25.679409  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:04:25.700819  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:04:25.717696  276917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:04:25.734330  276917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:04:25.746570  276917 ssh_runner.go:195] Run: openssl version
	I1216 05:04:25.752648  276917 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:25.761487  276917 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:04:25.769167  276917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:25.773313  276917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:25.773372  276917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:25.812271  276917 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:04:25.820049  276917 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:04:25.827677  276917 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:04:25.835217  276917 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:04:25.843522  276917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:04:25.847476  276917 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:04:25.847527  276917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:04:25.897898  276917 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:04:25.907457  276917 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:04:25.916732  276917 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:04:25.925992  276917 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:04:25.935801  276917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:04:25.940290  276917 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:04:25.940359  276917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:04:25.991755  276917 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:04:26.001450  276917 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
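The openssl/ln sequence above implements OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs needs a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). The same step as a Go sketch:

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the subject hash via openssl and creates the
// <hash>.0 symlink, equivalent to the log's ln -fs commands.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -f: replace any stale link
	return os.Symlink(certPath, link)
}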
	I1216 05:04:26.013165  276917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:04:26.018276  276917 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:04:26.018339  276917 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:26.018426  276917 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:04:26.018482  276917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:04:26.052967  276917 cri.go:89] found id: ""
	I1216 05:04:26.053066  276917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:04:26.062503  276917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:04:26.072475  276917 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:04:26.072928  276917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:04:26.081406  276917 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:04:26.081431  276917 kubeadm.go:158] found existing configuration files:
	
	I1216 05:04:26.081473  276917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 05:04:26.088932  276917 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:04:26.088992  276917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:04:26.096683  276917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 05:04:26.103972  276917 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:04:26.104049  276917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:04:26.111284  276917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 05:04:26.119413  276917 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:04:26.119462  276917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:04:26.126734  276917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 05:04:26.134473  276917 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:04:26.134544  276917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:04:26.142567  276917 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:04:26.201201  276917 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:04:26.260984  276917 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:04:25.972233  275199 addons.go:530] duration metric: took 3.233501902s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1216 05:04:25.973474  275199 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1216 05:04:25.973495  275199 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1216 05:04:26.469065  275199 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:04:26.473760  275199 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:04:26.475224  275199 api_server.go:141] control plane version: v1.28.0
	I1216 05:04:26.475251  275199 api_server.go:131] duration metric: took 506.979362ms to wait for apiserver health ...
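The healthz exchange above is the usual bring-up pattern: /healthz returns 500 while the rbac/bootstrap-roles post-start hook is still running, then flips to 200 "ok". A minimal Go sketch of that poll loop (illustrative; minikube trusts the cluster CA rather than skipping verification):

package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls e.g. https://192.168.85.2:8443/healthz until it
// returns 200, tolerating the transient 500s seen in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}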
	I1216 05:04:26.475263  275199 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:04:26.479253  275199 system_pods.go:59] 8 kube-system pods found
	I1216 05:04:26.479297  275199 system_pods.go:61] "coredns-5dd5756b68-fplzg" [c0e4aa41-d754-405c-83b6-53a5dd2fc511] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:26.479313  275199 system_pods.go:61] "etcd-old-k8s-version-545075" [da9d0634-efc9-4a8e-9c8d-f4d8bb9f80c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:04:26.479324  275199 system_pods.go:61] "kindnet-r74zw" [7905ab47-f1e5-4003-bc10-05de82340a76] Running
	I1216 05:04:26.479334  275199 system_pods.go:61] "kube-apiserver-old-k8s-version-545075" [ba5240be-0e87-43f4-b06d-ee5f28aa9d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:26.479345  275199 system_pods.go:61] "kube-controller-manager-old-k8s-version-545075" [f167bb45-3b1b-4c6b-80ab-b84d3c84b61d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:04:26.479353  275199 system_pods.go:61] "kube-proxy-7pldc" [0fdb902a-9463-46da-835f-598868f4d79b] Running
	I1216 05:04:26.479362  275199 system_pods.go:61] "kube-scheduler-old-k8s-version-545075" [0e0a438f-7c47-4270-a5c0-ebbef604d058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:04:26.479369  275199 system_pods.go:61] "storage-provisioner" [6ee6ea7e-845f-4594-8bbe-a0c45e39e694] Running
	I1216 05:04:26.479376  275199 system_pods.go:74] duration metric: took 4.106714ms to wait for pod list to return data ...
	I1216 05:04:26.479388  275199 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:04:26.481829  275199 default_sa.go:45] found service account: "default"
	I1216 05:04:26.481847  275199 default_sa.go:55] duration metric: took 2.452565ms for default service account to be created ...
	I1216 05:04:26.481857  275199 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:04:26.485280  275199 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:26.485308  275199 system_pods.go:89] "coredns-5dd5756b68-fplzg" [c0e4aa41-d754-405c-83b6-53a5dd2fc511] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:26.485319  275199 system_pods.go:89] "etcd-old-k8s-version-545075" [da9d0634-efc9-4a8e-9c8d-f4d8bb9f80c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:04:26.485327  275199 system_pods.go:89] "kindnet-r74zw" [7905ab47-f1e5-4003-bc10-05de82340a76] Running
	I1216 05:04:26.485336  275199 system_pods.go:89] "kube-apiserver-old-k8s-version-545075" [ba5240be-0e87-43f4-b06d-ee5f28aa9d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:04:26.485347  275199 system_pods.go:89] "kube-controller-manager-old-k8s-version-545075" [f167bb45-3b1b-4c6b-80ab-b84d3c84b61d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:04:26.485352  275199 system_pods.go:89] "kube-proxy-7pldc" [0fdb902a-9463-46da-835f-598868f4d79b] Running
	I1216 05:04:26.485365  275199 system_pods.go:89] "kube-scheduler-old-k8s-version-545075" [0e0a438f-7c47-4270-a5c0-ebbef604d058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:04:26.485373  275199 system_pods.go:89] "storage-provisioner" [6ee6ea7e-845f-4594-8bbe-a0c45e39e694] Running
	I1216 05:04:26.485382  275199 system_pods.go:126] duration metric: took 3.518714ms to wait for k8s-apps to be running ...
	I1216 05:04:26.485393  275199 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:04:26.485439  275199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:04:26.498440  275199 system_svc.go:56] duration metric: took 13.042159ms WaitForService to wait for kubelet
	I1216 05:04:26.498462  275199 kubeadm.go:587] duration metric: took 3.759872248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:26.498480  275199 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:04:26.500724  275199 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:04:26.500745  275199 node_conditions.go:123] node cpu capacity is 8
	I1216 05:04:26.500761  275199 node_conditions.go:105] duration metric: took 2.276204ms to run NodePressure ...
	I1216 05:04:26.500777  275199 start.go:242] waiting for startup goroutines ...
	I1216 05:04:26.500791  275199 start.go:247] waiting for cluster config update ...
	I1216 05:04:26.500808  275199 start.go:256] writing updated cluster config ...
	I1216 05:04:26.501138  275199 ssh_runner.go:195] Run: rm -f paused
	I1216 05:04:26.504835  275199 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:26.509152  275199 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fplzg" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:04:24.873134  268657 node_ready.go:57] node "embed-certs-127599" has "Ready":"False" status (will retry)
	I1216 05:04:27.371994  268657 node_ready.go:49] node "embed-certs-127599" is "Ready"
	I1216 05:04:27.372055  268657 node_ready.go:38] duration metric: took 11.503904826s for node "embed-certs-127599" to be "Ready" ...
	I1216 05:04:27.372073  268657 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:04:27.372130  268657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:27.386158  268657 api_server.go:72] duration metric: took 11.802186015s to wait for apiserver process to appear ...
	I1216 05:04:27.386190  268657 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:04:27.386215  268657 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 05:04:27.391935  268657 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 05:04:27.393000  268657 api_server.go:141] control plane version: v1.34.2
	I1216 05:04:27.393036  268657 api_server.go:131] duration metric: took 6.838ms to wait for apiserver health ...
	I1216 05:04:27.393047  268657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:04:27.396910  268657 system_pods.go:59] 8 kube-system pods found
	I1216 05:04:27.396947  268657 system_pods.go:61] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:27.396958  268657 system_pods.go:61] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running
	I1216 05:04:27.396967  268657 system_pods.go:61] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running
	I1216 05:04:27.396972  268657 system_pods.go:61] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running
	I1216 05:04:27.396984  268657 system_pods.go:61] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running
	I1216 05:04:27.396993  268657 system_pods.go:61] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running
	I1216 05:04:27.396999  268657 system_pods.go:61] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running
	I1216 05:04:27.397012  268657 system_pods.go:61] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:27.397033  268657 system_pods.go:74] duration metric: took 3.977943ms to wait for pod list to return data ...
	I1216 05:04:27.397048  268657 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:04:27.399479  268657 default_sa.go:45] found service account: "default"
	I1216 05:04:27.399501  268657 default_sa.go:55] duration metric: took 2.446359ms for default service account to be created ...
	I1216 05:04:27.399511  268657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:04:27.402249  268657 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:27.402282  268657 system_pods.go:89] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:27.402290  268657 system_pods.go:89] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running
	I1216 05:04:27.402299  268657 system_pods.go:89] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running
	I1216 05:04:27.402304  268657 system_pods.go:89] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running
	I1216 05:04:27.402310  268657 system_pods.go:89] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running
	I1216 05:04:27.402316  268657 system_pods.go:89] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running
	I1216 05:04:27.402321  268657 system_pods.go:89] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running
	I1216 05:04:27.402329  268657 system_pods.go:89] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:27.402356  268657 retry.go:31] will retry after 227.432634ms: missing components: kube-dns
	I1216 05:04:27.634453  268657 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:27.634484  268657 system_pods.go:89] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:04:27.634493  268657 system_pods.go:89] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running
	I1216 05:04:27.634501  268657 system_pods.go:89] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running
	I1216 05:04:27.634506  268657 system_pods.go:89] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running
	I1216 05:04:27.634512  268657 system_pods.go:89] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running
	I1216 05:04:27.634518  268657 system_pods.go:89] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running
	I1216 05:04:27.634523  268657 system_pods.go:89] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running
	I1216 05:04:27.634531  268657 system_pods.go:89] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:04:27.634550  268657 retry.go:31] will retry after 388.660227ms: missing components: kube-dns
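The "will retry after 227.432634ms / 388.660227ms" lines come from a backoff-style retry around the missing-components check. A sketch of that shape, with jittered, growing waits (not the exact retry.go implementation):

package sketch

import (
	"errors"
	"math/rand"
	"time"
)

// retryUntil re-runs check with jittered, growing sleeps until it succeeds
// or the deadline passes, matching the uneven intervals in the log.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	sleep := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out: " + err.Error())
		}
		time.Sleep(sleep + time.Duration(rand.Int63n(int64(sleep)))) // add jitter
		sleep = sleep * 3 / 2                                        // grow the base interval
	}
}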
	I1216 05:04:28.026923  268657 system_pods.go:86] 8 kube-system pods found
	I1216 05:04:28.026958  268657 system_pods.go:89] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Running
	I1216 05:04:28.026964  268657 system_pods.go:89] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running
	I1216 05:04:28.026968  268657 system_pods.go:89] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running
	I1216 05:04:28.026971  268657 system_pods.go:89] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running
	I1216 05:04:28.026975  268657 system_pods.go:89] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running
	I1216 05:04:28.026978  268657 system_pods.go:89] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running
	I1216 05:04:28.026981  268657 system_pods.go:89] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running
	I1216 05:04:28.026985  268657 system_pods.go:89] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Running
	I1216 05:04:28.026997  268657 system_pods.go:126] duration metric: took 627.478445ms to wait for k8s-apps to be running ...
	I1216 05:04:28.027007  268657 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:04:28.027071  268657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:04:28.039666  268657 system_svc.go:56] duration metric: took 12.65066ms WaitForService to wait for kubelet
	I1216 05:04:28.039692  268657 kubeadm.go:587] duration metric: took 12.455724365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:28.039714  268657 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:04:28.042299  268657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:04:28.042327  268657 node_conditions.go:123] node cpu capacity is 8
	I1216 05:04:28.042345  268657 node_conditions.go:105] duration metric: took 2.624162ms to run NodePressure ...
	I1216 05:04:28.042359  268657 start.go:242] waiting for startup goroutines ...
	I1216 05:04:28.042369  268657 start.go:247] waiting for cluster config update ...
	I1216 05:04:28.042386  268657 start.go:256] writing updated cluster config ...
	I1216 05:04:28.042680  268657 ssh_runner.go:195] Run: rm -f paused
	I1216 05:04:28.046428  268657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:28.049554  268657 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.053358  268657 pod_ready.go:94] pod "coredns-66bc5c9577-47h64" is "Ready"
	I1216 05:04:28.053383  268657 pod_ready.go:86] duration metric: took 3.805342ms for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.055231  268657 pod_ready.go:83] waiting for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.058536  268657 pod_ready.go:94] pod "etcd-embed-certs-127599" is "Ready"
	I1216 05:04:28.058554  268657 pod_ready.go:86] duration metric: took 3.305483ms for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.060321  268657 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.063481  268657 pod_ready.go:94] pod "kube-apiserver-embed-certs-127599" is "Ready"
	I1216 05:04:28.063498  268657 pod_ready.go:86] duration metric: took 3.162268ms for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.065201  268657 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.450149  268657 pod_ready.go:94] pod "kube-controller-manager-embed-certs-127599" is "Ready"
	I1216 05:04:28.450179  268657 pod_ready.go:86] duration metric: took 384.952566ms for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:28.650573  268657 pod_ready.go:83] waiting for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:29.050754  268657 pod_ready.go:94] pod "kube-proxy-j79cw" is "Ready"
	I1216 05:04:29.050781  268657 pod_ready.go:86] duration metric: took 400.183365ms for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:29.251166  268657 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:29.650340  268657 pod_ready.go:94] pod "kube-scheduler-embed-certs-127599" is "Ready"
	I1216 05:04:29.650365  268657 pod_ready.go:86] duration metric: took 399.175267ms for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:29.650377  268657 pod_ready.go:40] duration metric: took 1.603920465s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:04:29.693455  268657 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:04:29.695415  268657 out.go:179] * Done! kubectl is now configured to use "embed-certs-127599" cluster and "default" namespace by default
	W1216 05:04:28.514749  275199 pod_ready.go:104] pod "coredns-5dd5756b68-fplzg" is not "Ready", error: <nil>
	W1216 05:04:30.515164  275199 pod_ready.go:104] pod "coredns-5dd5756b68-fplzg" is not "Ready", error: <nil>
	W1216 05:04:32.516008  275199 pod_ready.go:104] pod "coredns-5dd5756b68-fplzg" is not "Ready", error: <nil>
	W1216 05:04:35.015282  275199 pod_ready.go:104] pod "coredns-5dd5756b68-fplzg" is not "Ready", error: <nil>
	W1216 05:04:37.514851  275199 pod_ready.go:104] pod "coredns-5dd5756b68-fplzg" is not "Ready", error: <nil>
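	A quick way to see why a pod such as "coredns-5dd5756b68-fplzg" keeps logging as not "Ready" is to query its Ready condition and recent events directly. A minimal sketch (the kubectl context for that run is not shown above, so <profile> is a placeholder for the profile under test):
	  # Print just the pod's Ready condition (status, reason, message).
	  kubectl --context <profile> -n kube-system get pod coredns-5dd5756b68-fplzg \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
	  # Show the tail of the pod description, where recent events land.
	  kubectl --context <profile> -n kube-system describe pod coredns-5dd5756b68-fplzg | tail -n 20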
	
	
	==> CRI-O <==
	Dec 16 05:04:27 embed-certs-127599 crio[777]: time="2025-12-16T05:04:27.426375218Z" level=info msg="Started container" PID=1838 containerID=4ab263ddcb561be5b83b2476b401c12e7230c5a90293dd850c1a8495be96e01f description=kube-system/storage-provisioner/storage-provisioner id=d972d8e9-3fcf-43ab-af8d-02005fd17db9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a7f9147aad4f65e1d2c62b322047d0bcb0984886a1ac89664629484eadf688c2
	Dec 16 05:04:27 embed-certs-127599 crio[777]: time="2025-12-16T05:04:27.42821569Z" level=info msg="Started container" PID=1841 containerID=247195a365b6e6f9a23a85e161798569eca4d4cacc0ce86795e0e2a2e7994959 description=kube-system/coredns-66bc5c9577-47h64/coredns id=448559c8-d779-4cdc-9a17-e489e12f1bb5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=513297fc78e4beb49b9375ba66de0bb64498a85ef0659d8319c46fea09253d4b
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.164884848Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7721a673-af48-47b7-b751-a6b38f419faa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.164982684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.172545665Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bfb4cf9fcbc306fbacb4e65002f09341682755ff5d0a0e44772054be95edf26e UID:253698b2-29a2-40df-963b-00ac87e3af9f NetNS:/var/run/netns/ffa3bfab-78a9-426b-b6a9-dfa8ad0a4c95 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004cc4c8}] Aliases:map[]}"
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.172586716Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.182676155Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bfb4cf9fcbc306fbacb4e65002f09341682755ff5d0a0e44772054be95edf26e UID:253698b2-29a2-40df-963b-00ac87e3af9f NetNS:/var/run/netns/ffa3bfab-78a9-426b-b6a9-dfa8ad0a4c95 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004cc4c8}] Aliases:map[]}"
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.182832441Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.183834659Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.184945247Z" level=info msg="Ran pod sandbox bfb4cf9fcbc306fbacb4e65002f09341682755ff5d0a0e44772054be95edf26e with infra container: default/busybox/POD" id=7721a673-af48-47b7-b751-a6b38f419faa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.186150462Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=19bb2c94-7a37-4c01-90c9-aa78faab2583 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.18626826Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=19bb2c94-7a37-4c01-90c9-aa78faab2583 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.186303275Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=19bb2c94-7a37-4c01-90c9-aa78faab2583 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.186999839Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e6261f0-8247-4cad-830a-22676ce4adb5 name=/runtime.v1.ImageService/PullImage
	Dec 16 05:04:30 embed-certs-127599 crio[777]: time="2025-12-16T05:04:30.188568267Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.428525507Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6e6261f0-8247-4cad-830a-22676ce4adb5 name=/runtime.v1.ImageService/PullImage
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.429362566Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6a817d34-000f-4db1-94fa-28368a67dfe5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.43087997Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c3ded15c-3201-4477-88dd-eb00bb3ec73c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.434274802Z" level=info msg="Creating container: default/busybox/busybox" id=3a266edb-5b06-4382-ad65-1bf223088ac8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.434401486Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.438291029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.438699541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.470321119Z" level=info msg="Created container 54145f77bac256e72af050c6e81bf080b27be62f482d2b4295979e8ee60aa9fc: default/busybox/busybox" id=3a266edb-5b06-4382-ad65-1bf223088ac8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.470962853Z" level=info msg="Starting container: 54145f77bac256e72af050c6e81bf080b27be62f482d2b4295979e8ee60aa9fc" id=a5748e3f-6922-485f-9a1b-282e369a98b4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:04:31 embed-certs-127599 crio[777]: time="2025-12-16T05:04:31.473120176Z" level=info msg="Started container" PID=1919 containerID=54145f77bac256e72af050c6e81bf080b27be62f482d2b4295979e8ee60aa9fc description=default/busybox/busybox id=a5748e3f-6922-485f-9a1b-282e369a98b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bfb4cf9fcbc306fbacb4e65002f09341682755ff5d0a0e44772054be95edf26e
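	The CRI-O log above walks the full lifecycle for default/busybox: sandbox creation, CNI attach, image pull, container create, container start. The same state can be cross-checked from inside the node with crictl, which talks to the same CRI-O socket; a minimal sketch (assumes the crictl binary shipped in minikube's node image):
	  # Confirm the pulled busybox image is in CRI-O's store.
	  minikube -p embed-certs-127599 ssh -- sudo crictl images | grep busybox
	  # Confirm the busybox container is running in the sandbox started above.
	  minikube -p embed-certs-127599 ssh -- sudo crictl ps --name busybox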
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	54145f77bac25       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   bfb4cf9fcbc30       busybox                                      default
	247195a365b6e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   513297fc78e4b       coredns-66bc5c9577-47h64                     kube-system
	4ab263ddcb561       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   a7f9147aad4f6       storage-provisioner                          kube-system
	cb9d90885dc5a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   b400500f01ba9       kindnet-nqzjf                                kube-system
	b9f5fb6104c06       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   7fa204c50a98a       kube-proxy-j79cw                             kube-system
	ac215b9d04cf7       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   f7770a11c5f31       kube-controller-manager-embed-certs-127599   kube-system
	93d8d86ace609       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   fc65606279010       etcd-embed-certs-127599                      kube-system
	cf5b628017995       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   43aea5bdf830c       kube-apiserver-embed-certs-127599            kube-system
	95c998ea896ef       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   dbd473d8d94a1       kube-scheduler-embed-certs-127599            kube-system
	
	
	==> coredns [247195a365b6e6f9a23a85e161798569eca4d4cacc0ce86795e0e2a2e7994959] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52790 - 39890 "HINFO IN 3398528516605781821.4259090379221980373. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017686549s
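	CoreDNS came up cleanly here (configuration loaded, HINFO self-check query answered). A common smoke test against an instance like this is a one-shot in-cluster lookup; a minimal sketch (the probe pod name is illustrative, and the image tag reuses the one already pulled in this run):
	  kubectl --context embed-certs-127599 run dns-probe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default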
	
	
	==> describe nodes <==
	Name:               embed-certs-127599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-127599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=embed-certs-127599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:04:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-127599
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:04:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:04:27 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:04:27 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:04:27 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:04:27 +0000   Tue, 16 Dec 2025 05:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-127599
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                5d01c184-5d74-40be-b384-da1ebe94d817
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-47h64                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-127599                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-nqzjf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-127599             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-127599    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-j79cw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-127599             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node embed-certs-127599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node embed-certs-127599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node embed-certs-127599 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node embed-certs-127599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node embed-certs-127599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node embed-certs-127599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-127599 event: Registered Node embed-certs-127599 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-127599 status is now: NodeReady
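	As a sanity check on the tables above: the 850m CPU request total is the sum of the per-pod requests, 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. the reported 10% of the node's 8-CPU (8000m) capacity; likewise the 220Mi memory request total is 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet).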
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
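	The repeated "martian source" entries record packets that arrived on eth0 with a loopback source address (127.0.0.1) destined for a pod IP, which the kernel logs whenever martian logging is enabled. The setting can be confirmed from the node; a minimal sketch (standard kernel sysctl keys; the eth0 key exists only while that interface does):
	  minikube -p embed-certs-127599 ssh -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians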
	
	
	==> etcd [93d8d86ace609abb8f1ed10170642d1e7fa32aa41081656a9428120f57963fd9] <==
	{"level":"warn","ts":"2025-12-16T05:04:06.465192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.484040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.497909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.508581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.518420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.527754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.538002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.547004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.557867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.570820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.583590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.604583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.615836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.634335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.643135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.656061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.669194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.680269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.696873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.704004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.722506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.726644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.735073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.744305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:06.808890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50524","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:04:39 up 47 min,  0 user,  load average: 3.74, 2.63, 1.85
	Linux embed-certs-127599 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb9d90885dc5ae166fbf75950927e151d68c93d96387c4b6640b2d2290c6be54] <==
	I1216 05:04:16.271487       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:04:16.271752       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1216 05:04:16.271963       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:04:16.271989       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:04:16.272005       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:04:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:04:16.569180       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:04:16.569992       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:04:16.570845       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:04:16.571087       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:04:16.771203       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:04:16.771291       1 metrics.go:72] Registering metrics
	I1216 05:04:16.771340       1 controller.go:711] "Syncing nftables rules"
	I1216 05:04:26.476186       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:04:26.476253       1 main.go:301] handling current node
	I1216 05:04:36.475511       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:04:36.475556       1 main.go:301] handling current node
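	kindnet reports syncing nftables rules for its network-policy controller, and the NRI connection failure is evidently non-fatal here (node handling continues afterwards); /var/run/nri/nri.sock is simply not provided by this runtime. The tables kindnet manages can be listed from the node; a minimal sketch (assumes the nft tool is present in the node image):
	  minikube -p embed-certs-127599 ssh -- sudo nft list tables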
	
	
	==> kube-apiserver [cf5b628017995e64a14faf34db06a78effc92a449fa790e65e4fa3c0b8b16a27] <==
	I1216 05:04:07.557492       1 policy_source.go:240] refreshing policies
	I1216 05:04:07.562365       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:04:07.656580       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1216 05:04:07.657508       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:07.666190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:07.666548       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:04:07.756539       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:04:08.451441       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 05:04:08.460489       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 05:04:08.460510       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:04:09.007134       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:04:09.045779       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:04:09.168865       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 05:04:09.175158       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1216 05:04:09.176248       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:04:09.180457       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:04:09.600784       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:04:10.059632       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:04:10.069345       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 05:04:10.076830       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 05:04:15.304985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:15.308487       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:15.605411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:04:15.654127       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1216 05:04:37.949853       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:42740: use of closed network connection
	
	
	==> kube-controller-manager [ac215b9d04cf7abc45fb0308d98c0b65866d4be1030ec99a930c6c7772960312] <==
	I1216 05:04:14.601167       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 05:04:14.601507       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 05:04:14.601884       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 05:04:14.601896       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 05:04:14.601920       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 05:04:14.603947       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:04:14.605442       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 05:04:14.606597       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 05:04:14.606659       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 05:04:14.606812       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 05:04:14.606887       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 05:04:14.607755       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:04:14.608150       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:04:14.608512       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 05:04:14.608623       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 05:04:14.608755       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 05:04:14.608818       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 05:04:14.609066       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 05:04:14.609080       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 05:04:14.608918       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 05:04:14.612178       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 05:04:14.616376       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-127599" podCIDRs=["10.244.0.0/24"]
	I1216 05:04:14.617481       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 05:04:14.626364       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:04:29.552665       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b9f5fb6104c065811b1b6ae7bf64eb5e247837a358d2a2cbae6bdc0cdde784a3] <==
	I1216 05:04:16.081375       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:04:16.144189       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:04:16.245505       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:04:16.245574       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1216 05:04:16.245664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:04:16.263651       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:04:16.263705       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:04:16.268909       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:04:16.269505       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:04:16.269543       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:04:16.271298       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:04:16.271329       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:04:16.271357       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:04:16.271376       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:04:16.271356       1 config.go:200] "Starting service config controller"
	I1216 05:04:16.271415       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:04:16.271467       1 config.go:309] "Starting node config controller"
	I1216 05:04:16.271474       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:04:16.271480       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:04:16.371527       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:04:16.371564       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:04:16.371541       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
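	The configuration warning above is advisory: with nodePortAddresses unset, NodePort connections are accepted on all local IPs, and the message itself suggests the `primary` setting. On a kubeadm-style cluster like this one that field lives in the kube-proxy ConfigMap; a minimal sketch for inspecting it (ConfigMap name is the kubeadm default):
	  kubectl --context embed-certs-127599 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses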
	
	
	==> kube-scheduler [95c998ea896ef8cbe579c6eb52c7641215ce64bd307240dfd4dfe3c88220c831] <==
	E1216 05:04:07.513702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 05:04:07.514698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 05:04:07.514745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 05:04:07.514861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 05:04:07.515735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 05:04:07.515741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 05:04:07.515796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 05:04:07.515962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 05:04:07.516094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 05:04:07.516151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 05:04:08.326101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 05:04:08.428916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 05:04:08.438397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 05:04:08.449176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 05:04:08.510130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 05:04:08.534313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 05:04:08.540427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 05:04:08.548678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 05:04:08.568868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 05:04:08.664432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 05:04:08.665574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 05:04:08.744659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 05:04:08.747716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 05:04:08.815320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1216 05:04:09.106973       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 05:04:10 embed-certs-127599 kubelet[1312]: E1216 05:04:10.942225    1312 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-127599\" already exists" pod="kube-system/kube-apiserver-embed-certs-127599"
	Dec 16 05:04:10 embed-certs-127599 kubelet[1312]: I1216 05:04:10.967950    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-127599" podStartSLOduration=0.967930168 podStartE2EDuration="967.930168ms" podCreationTimestamp="2025-12-16 05:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:10.956097051 +0000 UTC m=+1.132970693" watchObservedRunningTime="2025-12-16 05:04:10.967930168 +0000 UTC m=+1.144803801"
	Dec 16 05:04:10 embed-certs-127599 kubelet[1312]: I1216 05:04:10.979959    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-127599" podStartSLOduration=0.979939969 podStartE2EDuration="979.939969ms" podCreationTimestamp="2025-12-16 05:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:10.968305463 +0000 UTC m=+1.145179098" watchObservedRunningTime="2025-12-16 05:04:10.979939969 +0000 UTC m=+1.156813610"
	Dec 16 05:04:10 embed-certs-127599 kubelet[1312]: I1216 05:04:10.989493    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-127599" podStartSLOduration=0.989475599 podStartE2EDuration="989.475599ms" podCreationTimestamp="2025-12-16 05:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:10.980154029 +0000 UTC m=+1.157027652" watchObservedRunningTime="2025-12-16 05:04:10.989475599 +0000 UTC m=+1.166349239"
	Dec 16 05:04:10 embed-certs-127599 kubelet[1312]: I1216 05:04:10.989909    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-127599" podStartSLOduration=0.989870822 podStartE2EDuration="989.870822ms" podCreationTimestamp="2025-12-16 05:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:10.989110413 +0000 UTC m=+1.165984056" watchObservedRunningTime="2025-12-16 05:04:10.989870822 +0000 UTC m=+1.166744478"
	Dec 16 05:04:14 embed-certs-127599 kubelet[1312]: I1216 05:04:14.651913    1312 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 05:04:14 embed-certs-127599 kubelet[1312]: I1216 05:04:14.652671    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 05:04:15 embed-certs-127599 kubelet[1312]: I1216 05:04:15.743572    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91e8a52c-f22e-4d40-871b-e6a5daeb9e5f-xtables-lock\") pod \"kube-proxy-j79cw\" (UID: \"91e8a52c-f22e-4d40-871b-e6a5daeb9e5f\") " pod="kube-system/kube-proxy-j79cw"
	Dec 16 05:04:15 embed-certs-127599 kubelet[1312]: I1216 05:04:15.743623    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91e8a52c-f22e-4d40-871b-e6a5daeb9e5f-lib-modules\") pod \"kube-proxy-j79cw\" (UID: \"91e8a52c-f22e-4d40-871b-e6a5daeb9e5f\") " pod="kube-system/kube-proxy-j79cw"
	Dec 16 05:04:15 embed-certs-127599 kubelet[1312]: I1216 05:04:15.743647    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/740c8d9d-3a80-45a8-8916-67d54515815e-cni-cfg\") pod \"kindnet-nqzjf\" (UID: \"740c8d9d-3a80-45a8-8916-67d54515815e\") " pod="kube-system/kindnet-nqzjf"
	Dec 16 05:04:15 embed-certs-127599 kubelet[1312]: I1216 05:04:15.743729    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/740c8d9d-3a80-45a8-8916-67d54515815e-xtables-lock\") pod \"kindnet-nqzjf\" (UID: \"740c8d9d-3a80-45a8-8916-67d54515815e\") " pod="kube-system/kindnet-nqzjf"
	Dec 16 05:04:15 embed-certs-127599 kubelet[1312]: I1216 05:04:15.743780    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91e8a52c-f22e-4d40-871b-e6a5daeb9e5f-kube-proxy\") pod \"kube-proxy-j79cw\" (UID: \"91e8a52c-f22e-4d40-871b-e6a5daeb9e5f\") " pod="kube-system/kube-proxy-j79cw"
	Dec 16 05:04:15 embed-certs-127599 kubelet[1312]: I1216 05:04:15.743807    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bv76\" (UniqueName: \"kubernetes.io/projected/91e8a52c-f22e-4d40-871b-e6a5daeb9e5f-kube-api-access-9bv76\") pod \"kube-proxy-j79cw\" (UID: \"91e8a52c-f22e-4d40-871b-e6a5daeb9e5f\") " pod="kube-system/kube-proxy-j79cw"
	Dec 16 05:04:15 embed-certs-127599 kubelet[1312]: I1216 05:04:15.743862    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpg66\" (UniqueName: \"kubernetes.io/projected/740c8d9d-3a80-45a8-8916-67d54515815e-kube-api-access-vpg66\") pod \"kindnet-nqzjf\" (UID: \"740c8d9d-3a80-45a8-8916-67d54515815e\") " pod="kube-system/kindnet-nqzjf"
	Dec 16 05:04:15 embed-certs-127599 kubelet[1312]: I1216 05:04:15.743948    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/740c8d9d-3a80-45a8-8916-67d54515815e-lib-modules\") pod \"kindnet-nqzjf\" (UID: \"740c8d9d-3a80-45a8-8916-67d54515815e\") " pod="kube-system/kindnet-nqzjf"
	Dec 16 05:04:16 embed-certs-127599 kubelet[1312]: I1216 05:04:16.969170    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j79cw" podStartSLOduration=1.969149877 podStartE2EDuration="1.969149877s" podCreationTimestamp="2025-12-16 05:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:16.959831214 +0000 UTC m=+7.136704855" watchObservedRunningTime="2025-12-16 05:04:16.969149877 +0000 UTC m=+7.146023514"
	Dec 16 05:04:16 embed-certs-127599 kubelet[1312]: I1216 05:04:16.969808    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nqzjf" podStartSLOduration=1.969791285 podStartE2EDuration="1.969791285s" podCreationTimestamp="2025-12-16 05:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:16.96913706 +0000 UTC m=+7.146010702" watchObservedRunningTime="2025-12-16 05:04:16.969791285 +0000 UTC m=+7.146664927"
	Dec 16 05:04:27 embed-certs-127599 kubelet[1312]: I1216 05:04:27.049149    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 16 05:04:27 embed-certs-127599 kubelet[1312]: I1216 05:04:27.134379    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkrk\" (UniqueName: \"kubernetes.io/projected/ae7b55f0-073f-4442-bc1a-e168246b4bd6-kube-api-access-7vkrk\") pod \"storage-provisioner\" (UID: \"ae7b55f0-073f-4442-bc1a-e168246b4bd6\") " pod="kube-system/storage-provisioner"
	Dec 16 05:04:27 embed-certs-127599 kubelet[1312]: I1216 05:04:27.134432    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe03346d-aa19-42e5-9352-7aed24d60503-config-volume\") pod \"coredns-66bc5c9577-47h64\" (UID: \"fe03346d-aa19-42e5-9352-7aed24d60503\") " pod="kube-system/coredns-66bc5c9577-47h64"
	Dec 16 05:04:27 embed-certs-127599 kubelet[1312]: I1216 05:04:27.134464    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ae7b55f0-073f-4442-bc1a-e168246b4bd6-tmp\") pod \"storage-provisioner\" (UID: \"ae7b55f0-073f-4442-bc1a-e168246b4bd6\") " pod="kube-system/storage-provisioner"
	Dec 16 05:04:27 embed-certs-127599 kubelet[1312]: I1216 05:04:27.134488    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qk8n\" (UniqueName: \"kubernetes.io/projected/fe03346d-aa19-42e5-9352-7aed24d60503-kube-api-access-8qk8n\") pod \"coredns-66bc5c9577-47h64\" (UID: \"fe03346d-aa19-42e5-9352-7aed24d60503\") " pod="kube-system/coredns-66bc5c9577-47h64"
	Dec 16 05:04:27 embed-certs-127599 kubelet[1312]: I1216 05:04:27.985491    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-47h64" podStartSLOduration=12.985467798 podStartE2EDuration="12.985467798s" podCreationTimestamp="2025-12-16 05:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:27.985299983 +0000 UTC m=+18.162173624" watchObservedRunningTime="2025-12-16 05:04:27.985467798 +0000 UTC m=+18.162341438"
	Dec 16 05:04:27 embed-certs-127599 kubelet[1312]: I1216 05:04:27.994316    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.994293526 podStartE2EDuration="11.994293526s" podCreationTimestamp="2025-12-16 05:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:27.99374938 +0000 UTC m=+18.170623021" watchObservedRunningTime="2025-12-16 05:04:27.994293526 +0000 UTC m=+18.171167166"
	Dec 16 05:04:29 embed-certs-127599 kubelet[1312]: I1216 05:04:29.951201    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29ml\" (UniqueName: \"kubernetes.io/projected/253698b2-29a2-40df-963b-00ac87e3af9f-kube-api-access-f29ml\") pod \"busybox\" (UID: \"253698b2-29a2-40df-963b-00ac87e3af9f\") " pod="default/busybox"
	
	
	==> storage-provisioner [4ab263ddcb561be5b83b2476b401c12e7230c5a90293dd850c1a8495be96e01f] <==
	I1216 05:04:27.440787       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:04:27.451719       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:04:27.451778       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:04:27.454037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:27.458912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:04:27.459122       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:04:27.459243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"063e8c72-eccb-4343-aa0d-2dc37c13d7d3", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-127599_8c328455-b4b8-49f2-89f2-f53f808827e7 became leader
	I1216 05:04:27.459284       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-127599_8c328455-b4b8-49f2-89f2-f53f808827e7!
	W1216 05:04:27.461840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:27.466354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:04:27.560034       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-127599_8c328455-b4b8-49f2-89f2-f53f808827e7!
	W1216 05:04:29.469294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:29.472977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:31.475621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:31.480172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:33.483227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:33.487345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:35.492653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:35.496727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:37.499913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:37.504305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:39.508744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:39.514556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
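
The repeated client-go warnings in the storage-provisioner log above come from its leader election, which still reads and writes a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) every couple of seconds; that resource is deprecated for this use since Kubernetes v1.33 in favor of discovery.k8s.io/v1 EndpointSlice (and coordination.k8s.io/v1 Lease for leader election specifically). A minimal way to inspect both sides, assuming the embed-certs-127599 context from this test:

	# legacy lock object the provisioner keeps updating (the source of the warnings)
	kubectl --context embed-certs-127599 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# modern leader-election objects used by core components
	kubectl --context embed-certs-127599 -n kube-system get leases
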
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-127599 -n embed-certs-127599
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-127599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (325.265726ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
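
The MK_ADDON_ENABLE_PAUSED failure above means the paused-state check itself broke, not that the cluster was paused: minikube probes the runtime with `sudo runc list -f json` on the node, and runc exits 1 because its default state directory /run/runc does not exist. One plausible cause (an assumption, not confirmed by these logs) is that the crio node is using a different OCI runtime such as crun, so runc never creates its state directory. A minimal reproduction sketch, assuming the profile name from this test:

	# the exact probe minikube runs during the paused check
	minikube -p default-k8s-diff-port-375252 ssh -- sudo runc list -f json
	# see which OCI runtime state directory actually exists on the node
	minikube -p default-k8s-diff-port-375252 ssh -- sudo ls /run/runc /run/crun
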
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-375252 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-375252 describe deploy/metrics-server -n kube-system: exit status 1 (86.740974ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-375252 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
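
For reference, the --images/--registries pair composes the final reference by prefixing the custom registry onto the custom image, so a successful enable would have rendered the metrics-server container image as fake.domain/registry.k8s.io/echoserver:1.4. A quick way to verify that on a run where the deployment exists, assuming the same profile:

	# print the image the addon actually deployed
	kubectl --context default-k8s-diff-port-375252 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
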
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-375252
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-375252:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4",
	        "Created": "2025-12-16T05:04:21.317560531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278072,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:04:21.352510788Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/hosts",
	        "LogPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4-json.log",
	        "Name": "/default-k8s-diff-port-375252",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-375252:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-375252",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4",
	                "LowerDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-375252",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-375252/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-375252",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-375252",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-375252",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3a494e4e19fe333ec103bf13dd0e50f35b204a893f0b63d2dc42b2bc12432b2a",
	            "SandboxKey": "/var/run/docker/netns/3a494e4e19fe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-375252": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b4cd001204df0b61e89eebe6a12657aa97766dbbe6ee158bf1c23f85315b1b70",
	                    "EndpointID": "16b1f7c245837606a33967a039af667013ebf77a6a080bb5130f79bc90f7df69",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "7a:c8:a2:2c:32:06",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-375252",
	                        "f7855ebd974c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
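
Note that HostConfig.PortBindings in the inspect output above requests ephemeral host ports (HostPort: "") bound to 127.0.0.1, and the actual assignments appear only under NetworkSettings.Ports (for example 8444/tcp -> 127.0.0.1:33084). Two equivalent ways to recover a mapping without scanning the full JSON, assuming the container name from this test:

	# ask docker for the published address of the apiserver port
	docker port default-k8s-diff-port-375252 8444/tcp
	# or pull just the host port via a Go template
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-375252
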
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-375252 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-375252 logs -n 25: (1.424915826s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-548714                                                                                                                                                                                                                              │ cilium-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │ 16 Dec 25 05:02 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p stopped-upgrade-572334                                                                                                                                                                                                                     │ stopped-upgrade-572334       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p cert-expiration-123545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p cert-expiration-123545                                                                                                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │                     │
	│ stop    │ -p old-k8s-version-545075 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-545075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p kubernetes-upgrade-132399                                                                                                                                                                                                                  │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ delete  │ -p disable-driver-mounts-148966                                                                                                                                                                                                               │ disable-driver-mounts-148966 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p no-preload-990631 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p embed-certs-127599 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:04:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:04:58.807546  285878 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:04:58.807646  285878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:58.807658  285878 out.go:374] Setting ErrFile to fd 2...
	I1216 05:04:58.807663  285878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:58.807870  285878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:04:58.808307  285878 out.go:368] Setting JSON to false
	I1216 05:04:58.809533  285878 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2850,"bootTime":1765858649,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:04:58.809586  285878 start.go:143] virtualization: kvm guest
	I1216 05:04:58.811715  285878 out.go:179] * [embed-certs-127599] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:04:58.813202  285878 notify.go:221] Checking for updates...
	I1216 05:04:58.813273  285878 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:04:58.814671  285878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:04:58.816055  285878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:58.817240  285878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:04:58.818360  285878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:04:58.819590  285878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:04:58.821480  285878 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:04:58.822299  285878 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:04:58.848069  285878 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:04:58.848202  285878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:58.906263  285878 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:04:58.896349391 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:58.906399  285878 docker.go:319] overlay module found
	I1216 05:04:58.908347  285878 out.go:179] * Using the docker driver based on existing profile
	I1216 05:04:58.909504  285878 start.go:309] selected driver: docker
	I1216 05:04:58.909519  285878 start.go:927] validating driver "docker" against &{Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:58.909604  285878 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:04:58.910114  285878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:58.971198  285878 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:04:58.959512916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:58.971557  285878 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:58.971608  285878 cni.go:84] Creating CNI manager for ""
	I1216 05:04:58.971676  285878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:58.971782  285878 start.go:353] cluster config:
	{Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:58.974821  285878 out.go:179] * Starting "embed-certs-127599" primary control-plane node in "embed-certs-127599" cluster
	I1216 05:04:58.976420  285878 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:04:58.978154  285878 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:04:58.980043  285878 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:58.980116  285878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:04:58.980120  285878 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:04:58.980154  285878 cache.go:65] Caching tarball of preloaded images
	I1216 05:04:58.980280  285878 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:04:58.980291  285878 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:04:58.980419  285878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/config.json ...
	I1216 05:04:59.006359  285878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:04:59.006379  285878 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:04:59.006397  285878 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:04:59.006434  285878 start.go:360] acquireMachinesLock for embed-certs-127599: {Name:mk782536a8f26bd7063e28ead9562fcba9d13ffe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:04:59.006494  285878 start.go:364] duration metric: took 39.093µs to acquireMachinesLock for "embed-certs-127599"
	I1216 05:04:59.006518  285878 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:04:59.006527  285878 fix.go:54] fixHost starting: 
	I1216 05:04:59.006819  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:04:59.030521  285878 fix.go:112] recreateIfNeeded on embed-certs-127599: state=Stopped err=<nil>
	W1216 05:04:59.030575  285878 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:04:58.146615  284202 cli_runner.go:164] Run: docker network inspect no-preload-990631 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:58.164702  284202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 05:04:58.168903  284202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:58.179164  284202 kubeadm.go:884] updating cluster {Name:no-preload-990631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:04:58.179295  284202 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:04:58.179337  284202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:58.211615  284202 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:58.211639  284202 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:04:58.211647  284202 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 05:04:58.211734  284202 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-990631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:04:58.211797  284202 ssh_runner.go:195] Run: crio config
	I1216 05:04:58.256766  284202 cni.go:84] Creating CNI manager for ""
	I1216 05:04:58.256789  284202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:58.256801  284202 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:04:58.256822  284202 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-990631 NodeName:no-preload-990631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:04:58.256971  284202 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-990631"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:04:58.257052  284202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:04:58.265699  284202 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:04:58.265763  284202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:04:58.273700  284202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1216 05:04:58.286894  284202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:04:58.300035  284202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1216 05:04:58.313614  284202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:04:58.317210  284202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:58.327214  284202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:58.414343  284202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:58.443043  284202 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631 for IP: 192.168.103.2
	I1216 05:04:58.443061  284202 certs.go:195] generating shared ca certs ...
	I1216 05:04:58.443081  284202 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:58.443293  284202 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:04:58.443372  284202 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:04:58.443391  284202 certs.go:257] generating profile certs ...
	I1216 05:04:58.443497  284202 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.key
	I1216 05:04:58.443575  284202 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key.44083b8e
	I1216 05:04:58.443635  284202 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key
	I1216 05:04:58.443842  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:04:58.443905  284202 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:04:58.443919  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:04:58.443943  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:04:58.443970  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:04:58.443999  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:04:58.444082  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:58.444854  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:04:58.475449  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:04:58.495391  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:04:58.517175  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:04:58.549655  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:04:58.572549  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:04:58.596524  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:04:58.618088  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:04:58.640733  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:04:58.663625  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:04:58.682127  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:04:58.700365  284202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:04:58.715543  284202 ssh_runner.go:195] Run: openssl version
	I1216 05:04:58.722348  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.729959  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:04:58.738355  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.742405  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.742457  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.787292  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:04:58.796645  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.804122  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:04:58.812151  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.816028  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.816079  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.855153  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:04:58.863425  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.873487  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:04:58.882811  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.887176  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.887238  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.926865  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
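The sequence above installs each CA into the OpenSSL trust store: copy the PEM to /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash, and symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it. A minimal Go sketch of the same steps (paths taken from the log; this is an illustration, not minikube's own helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA mirrors the log's steps: hash the PEM with openssl, then
	// symlink it as /etc/ssl/certs/<hash>.0 (the ".0" suffix disambiguates
	// subject-hash collisions).
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // the -f in "ln -fs"
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}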
	I1216 05:04:58.937699  284202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:04:58.942829  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:04:58.990450  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:04:59.040055  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:04:59.099401  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:04:59.157276  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:04:59.215120  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
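The -checkend 86400 probes above exit non-zero if a certificate expires within the next 24 hours (86400 seconds), which is how the tool decides whether the control-plane certs need regeneration. A native equivalent, sketched with crypto/x509 under the assumption of one PEM block per file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d, i.e. what "openssl x509 -checkend" tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}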
	I1216 05:04:59.258742  284202 kubeadm.go:401] StartCluster: {Name:no-preload-990631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:59.258841  284202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:04:59.258895  284202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:04:59.292160  284202 cri.go:89] found id: "4b4c73e54f123802f62a7af7b9af9fe41679d8d51fd01e105ca758edf669a4a5"
	I1216 05:04:59.292185  284202 cri.go:89] found id: "c7bb8494fe95f58ef0c8c0f1c32dd0af7c7cc7b113eb924e8c2854b0b292a637"
	I1216 05:04:59.292192  284202 cri.go:89] found id: "3332f5674ea8590f97c6776b7b666398f75a26d82a6f36588091b67934df1dd5"
	I1216 05:04:59.292196  284202 cri.go:89] found id: "f9cec7fd839f6e0c5273a0f3e839f84aa3e28f3ba21d352e705a6f3c17d87bd9"
	I1216 05:04:59.292201  284202 cri.go:89] found id: ""
	I1216 05:04:59.292247  284202 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:04:59.307274  284202 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:04:59Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:04:59.307344  284202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:04:59.316697  284202 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:04:59.316715  284202 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:04:59.316758  284202 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:04:59.327769  284202 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:04:59.328615  284202 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-990631" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:59.329102  284202 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-990631" cluster setting kubeconfig missing "no-preload-990631" context setting]
	I1216 05:04:59.329859  284202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
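The repair step above adds cluster, context, and user entries for "no-preload-990631" to the kubeconfig. Roughly the shape it converges on, with the server address and certificate paths taken from the log and remaining fields (current-context and friends) omitted:

	apiVersion: v1
	kind: Config
	clusters:
	- name: no-preload-990631
	  cluster:
	    server: https://192.168.103.2:8443
	    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
	contexts:
	- name: no-preload-990631
	  context:
	    cluster: no-preload-990631
	    user: no-preload-990631
	users:
	- name: no-preload-990631
	  user:
	    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.crt
	    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.key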
	I1216 05:04:59.331782  284202 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:04:59.341505  284202 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1216 05:04:59.341536  284202 kubeadm.go:602] duration metric: took 24.814841ms to restartPrimaryControlPlane
	I1216 05:04:59.341546  284202 kubeadm.go:403] duration metric: took 82.81371ms to StartCluster
	I1216 05:04:59.341562  284202 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:59.341627  284202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:59.343176  284202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:59.343399  284202 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:04:59.343621  284202 config.go:182] Loaded profile config "no-preload-990631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:04:59.343681  284202 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:04:59.343779  284202 addons.go:70] Setting storage-provisioner=true in profile "no-preload-990631"
	I1216 05:04:59.343815  284202 addons.go:239] Setting addon storage-provisioner=true in "no-preload-990631"
	W1216 05:04:59.343827  284202 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:04:59.343872  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.343982  284202 addons.go:70] Setting dashboard=true in profile "no-preload-990631"
	I1216 05:04:59.344000  284202 addons.go:70] Setting default-storageclass=true in profile "no-preload-990631"
	I1216 05:04:59.344031  284202 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-990631"
	I1216 05:04:59.344014  284202 addons.go:239] Setting addon dashboard=true in "no-preload-990631"
	W1216 05:04:59.344156  284202 addons.go:248] addon dashboard should already be in state true
	I1216 05:04:59.344188  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.344358  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.344495  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.344647  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.347229  284202 out.go:179] * Verifying Kubernetes components...
	I1216 05:04:59.349480  284202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:59.373084  284202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:04:59.373093  284202 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 05:04:59.373127  284202 addons.go:239] Setting addon default-storageclass=true in "no-preload-990631"
	W1216 05:04:59.373249  284202 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:04:59.373317  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.373809  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.374487  284202 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:59.374504  284202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:04:59.374557  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.376102  284202 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:04:59.377081  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:04:59.377099  284202 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:04:59.377145  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.399402  284202 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:59.399426  284202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:04:59.399475  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.404857  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:04:59.417430  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:04:59.431357  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
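Each cli_runner/sshutil pair above first resolves the host port Docker published for the container's sshd (22/tcp), then dials SSH at 127.0.0.1:<port>. A sketch of the port lookup, reusing the same Go template that appears in the log (a hypothetical helper, not minikube's sshutil):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort resolves the host port Docker published for the
	// container's sshd, using the template from the inspect calls above.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("no-preload-990631")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + port)
	}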
	I1216 05:04:59.515049  284202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:59.526392  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:59.531687  284202 node_ready.go:35] waiting up to 6m0s for node "no-preload-990631" to be "Ready" ...
	I1216 05:04:59.533132  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:04:59.533157  284202 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:04:59.538389  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:59.549617  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:04:59.549639  284202 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:04:59.569671  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:04:59.569760  284202 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:04:59.593378  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:04:59.593405  284202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:04:59.621948  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:04:59.621972  284202 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:04:59.644345  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:04:59.644448  284202 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:04:59.661945  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:04:59.661975  284202 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:04:59.678222  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:04:59.678252  284202 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:04:59.700122  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:04:59.700148  284202 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:04:59.719777  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:00.484236  284202 node_ready.go:49] node "no-preload-990631" is "Ready"
	I1216 05:05:00.484283  284202 node_ready.go:38] duration metric: took 952.560787ms for node "no-preload-990631" to be "Ready" ...
	I1216 05:05:00.484302  284202 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:00.484360  284202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:01.061883  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.535448575s)
	I1216 05:05:01.061947  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.523532435s)
	I1216 05:05:01.062100  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.342232728s)
	I1216 05:05:01.062128  284202 api_server.go:72] duration metric: took 1.718698599s to wait for apiserver process to appear ...
	I1216 05:05:01.062138  284202 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:01.062165  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:01.064061  284202 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-990631 addons enable metrics-server
	
	I1216 05:05:01.067391  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:01.067413  284202 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:01.069255  284202 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1216 05:05:01.070615  284202 addons.go:530] duration metric: took 1.726932128s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1216 05:05:01.563168  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:01.567835  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:01.567870  284202 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:02.062462  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:02.066872  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 05:05:02.068217  284202 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:05:02.068239  284202 api_server.go:131] duration metric: took 1.006094275s to wait for apiserver health ...
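The healthz loop above keeps polling the endpoint while post-start hooks (the [-] lines such as rbac/bootstrap-roles) are still pending and the server answers 500, and stops once it gets a plain 200 "ok". A minimal poller, assuming TLS verification is skipped (the real client trusts the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls the endpoint until it returns 200 ("ok"), the
	// signal that every kube-apiserver post-start hook has finished.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		fmt.Println(waitHealthy("https://192.168.103.2:8443/healthz", 2*time.Minute))
	}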
	I1216 05:05:02.068247  284202 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:02.072657  284202 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:02.072700  284202 system_pods.go:61] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:02.072712  284202 system_pods.go:61] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:02.072720  284202 system_pods.go:61] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:05:02.072735  284202 system_pods.go:61] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:02.072793  284202 system_pods.go:61] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:02.072801  284202 system_pods.go:61] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:05:02.072815  284202 system_pods.go:61] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:02.072820  284202 system_pods.go:61] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Running
	I1216 05:05:02.072827  284202 system_pods.go:74] duration metric: took 4.575509ms to wait for pod list to return data ...
	I1216 05:05:02.072835  284202 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:02.075526  284202 default_sa.go:45] found service account: "default"
	I1216 05:05:02.075545  284202 default_sa.go:55] duration metric: took 2.704707ms for default service account to be created ...
	I1216 05:05:02.075552  284202 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:05:02.078045  284202 system_pods.go:86] 8 kube-system pods found
	I1216 05:05:02.078069  284202 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:02.078076  284202 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:02.078081  284202 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:05:02.078087  284202 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:02.078098  284202 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:02.078105  284202 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:05:02.078113  284202 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:02.078120  284202 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Running
	I1216 05:05:02.078129  284202 system_pods.go:126] duration metric: took 2.571444ms to wait for k8s-apps to be running ...
	I1216 05:05:02.078140  284202 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:05:02.078198  284202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:02.090955  284202 system_svc.go:56] duration metric: took 12.80557ms WaitForService to wait for kubelet
	I1216 05:05:02.090982  284202 kubeadm.go:587] duration metric: took 2.74755423s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:02.090999  284202 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:02.093755  284202 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:02.093776  284202 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:02.093791  284202 node_conditions.go:105] duration metric: took 2.787822ms to run NodePressure ...
	I1216 05:05:02.093801  284202 start.go:242] waiting for startup goroutines ...
	I1216 05:05:02.093808  284202 start.go:247] waiting for cluster config update ...
	I1216 05:05:02.093821  284202 start.go:256] writing updated cluster config ...
	I1216 05:05:02.094117  284202 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:02.097851  284202 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:02.100920  284202 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
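The pod_ready waits that start here poll each control-plane label selector until the matching pods report the Ready condition (or disappear). A compact client-go sketch of the same idea, with the selector list shortened from the log; this is an illustration, not minikube's pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
					metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					panic(err)
				}
				ready := len(pods.Items) > 0
				for i := range pods.Items {
					ready = ready && podReady(&pods.Items[i])
				}
				if ready {
					fmt.Println(sel, "ready")
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
	}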
	W1216 05:05:00.516383  275199 pod_ready.go:104] pod "coredns-5dd5756b68-fplzg" is not "Ready", error: <nil>
	I1216 05:05:03.014998  275199 pod_ready.go:94] pod "coredns-5dd5756b68-fplzg" is "Ready"
	I1216 05:05:03.015051  275199 pod_ready.go:86] duration metric: took 36.505880263s for pod "coredns-5dd5756b68-fplzg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.018010  275199 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.022411  275199 pod_ready.go:94] pod "etcd-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.022432  275199 pod_ready.go:86] duration metric: took 4.39181ms for pod "etcd-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.025267  275199 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.029715  275199 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.029741  275199 pod_ready.go:86] duration metric: took 4.454401ms for pod "kube-apiserver-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.032803  275199 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.212694  275199 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.212718  275199 pod_ready.go:86] duration metric: took 179.891662ms for pod "kube-controller-manager-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:59.034761  285878 out.go:252] * Restarting existing docker container for "embed-certs-127599" ...
	I1216 05:04:59.034846  285878 cli_runner.go:164] Run: docker start embed-certs-127599
	I1216 05:04:59.342519  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:04:59.369236  285878 kic.go:430] container "embed-certs-127599" state is running.
	I1216 05:04:59.369666  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:04:59.398479  285878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/config.json ...
	I1216 05:04:59.398806  285878 machine.go:94] provisionDockerMachine start ...
	I1216 05:04:59.398910  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:04:59.434387  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:59.434715  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:04:59.434744  285878 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:04:59.435373  285878 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57022->127.0.0.1:33091: read: connection reset by peer
	I1216 05:05:02.565577  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-127599
	
	I1216 05:05:02.565604  285878 ubuntu.go:182] provisioning hostname "embed-certs-127599"
	I1216 05:05:02.565682  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.585360  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:02.585581  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:02.585595  285878 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-127599 && echo "embed-certs-127599" | sudo tee /etc/hostname
	I1216 05:05:02.721382  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-127599
	
	I1216 05:05:02.721466  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.739517  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:02.739721  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:02.739737  285878 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-127599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-127599/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-127599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:02.866626  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:02.866657  285878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:02.866676  285878 ubuntu.go:190] setting up certificates
	I1216 05:05:02.866687  285878 provision.go:84] configureAuth start
	I1216 05:05:02.866760  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:05:02.884961  285878 provision.go:143] copyHostCerts
	I1216 05:05:02.885047  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:02.885066  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:02.885138  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:02.885238  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:02.885250  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:02.885279  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:02.885346  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:02.885354  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:02.885376  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:02.885439  285878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.embed-certs-127599 san=[127.0.0.1 192.168.94.2 embed-certs-127599 localhost minikube]
	I1216 05:05:02.905981  285878 provision.go:177] copyRemoteCerts
	I1216 05:05:02.906048  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:02.906098  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.925287  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.020273  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:03.041871  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:05:03.060678  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:05:03.078064  285878 provision.go:87] duration metric: took 211.357317ms to configureAuth
	I1216 05:05:03.078092  285878 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:03.078242  285878 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:03.078323  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.096032  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:03.096295  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:03.096316  285878 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:03.415229  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:03.415249  285878 machine.go:97] duration metric: took 4.016422304s to provisionDockerMachine
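The sysconfig step above (the printf | sudo tee ... && systemctl restart crio command) drops a one-line fragment so CRI-O treats the service CIDR as an insecure registry, then restarts the daemon. The same steps as a standalone sketch (must run as root; file path and contents taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const drop = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(drop), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Restart CRI-O so it picks up the new option.
		if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "restart crio: %v\n%s", err, out)
			os.Exit(1)
		}
	}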
	I1216 05:05:03.415264  285878 start.go:293] postStartSetup for "embed-certs-127599" (driver="docker")
	I1216 05:05:03.415277  285878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:03.415379  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:03.415436  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.434727  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.526138  285878 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:03.529645  285878 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:03.529676  285878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:03.529688  285878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:03.529745  285878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:03.529814  285878 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:03.529894  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:03.537519  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:03.555384  285878 start.go:296] duration metric: took 140.103941ms for postStartSetup
	I1216 05:05:03.555454  285878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:03.555498  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.574541  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.671209  285878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:03.675704  285878 fix.go:56] duration metric: took 4.669173594s for fixHost
	I1216 05:05:03.675732  285878 start.go:83] releasing machines lock for "embed-certs-127599", held for 4.669223523s
	I1216 05:05:03.675807  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:05:03.693006  285878 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:03.693089  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.693130  285878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:03.693193  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.711276  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.715049  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.415146  275199 pod_ready.go:83] waiting for pod "kube-proxy-7pldc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.813274  275199 pod_ready.go:94] pod "kube-proxy-7pldc" is "Ready"
	I1216 05:05:03.813309  275199 pod_ready.go:86] duration metric: took 398.130225ms for pod "kube-proxy-7pldc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.016765  275199 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.412999  275199 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-545075" is "Ready"
	I1216 05:05:04.413055  275199 pod_ready.go:86] duration metric: took 396.261871ms for pod "kube-scheduler-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.413072  275199 pod_ready.go:40] duration metric: took 37.908213005s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:04.460171  275199 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1216 05:05:04.461837  275199 out.go:203] 
	W1216 05:05:04.462951  275199 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1216 05:05:04.464050  275199 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1216 05:05:04.465161  275199 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-545075" cluster and "default" namespace by default
	I1216 05:05:03.870367  285878 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:03.877245  285878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:03.911801  285878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:03.916421  285878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:03.916499  285878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:03.924729  285878 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:05:03.924754  285878 start.go:496] detecting cgroup driver to use...
	I1216 05:05:03.924789  285878 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:03.924826  285878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:03.938355  285878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:03.950687  285878 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:03.950755  285878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:03.966322  285878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:03.978654  285878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:04.067259  285878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:04.157828  285878 docker.go:234] disabling docker service ...
	I1216 05:05:04.157924  285878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:04.172457  285878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:04.185242  285878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:04.268722  285878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:04.352800  285878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:04.369946  285878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:04.389991  285878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:04.390111  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.402275  285878 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:04.402346  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.413804  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.423233  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.433098  285878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:04.441987  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.452342  285878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.461567  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.470671  285878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:04.478121  285878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:04.485428  285878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:04.584484  285878 ssh_runner.go:195] Run: sudo systemctl restart crio
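Taken together, the sed edits above converge the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf on roughly the following TOML before the restart (section placement is assumed; only the fields touched by the commands in the log are shown):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]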
	I1216 05:05:04.714916  285878 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:04.715042  285878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:04.719134  285878 start.go:564] Will wait 60s for crictl version
	I1216 05:05:04.719175  285878 ssh_runner.go:195] Run: which crictl
	I1216 05:05:04.722694  285878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:04.750409  285878 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:05:04.750485  285878 ssh_runner.go:195] Run: crio --version
	I1216 05:05:04.786143  285878 ssh_runner.go:195] Run: crio --version
	I1216 05:05:04.822457  285878 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	
	
	==> CRI-O <==
	Dec 16 05:04:54 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:54.963381328Z" level=info msg="Started container" PID=1877 containerID=285d126ff1ce4aa196dcdd4177f252143a4ea859b6315aa125c516a24e07d5cd description=kube-system/storage-provisioner/storage-provisioner id=ca17f59e-3550-41c7-935e-2fc3da0ff29c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e87519198e6e2f8d30377e45ea776fcf6f336e2a28b03f3e052ff5a2479215ab
	Dec 16 05:04:54 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:54.96359131Z" level=info msg="Started container" PID=1878 containerID=bb0012d0ac76aa15466416c603e7484cd53d1b3e57f5def9ffbcbc542d8a0bcc description=kube-system/coredns-66bc5c9577-65x5z/coredns id=a893b9ae-ed04-44fe-9982-8aa0121c78b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=50d5054c500893a29585b0152fdc4f81704fc861f67ae16505c78cb120914469
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.35899002Z" level=info msg="Running pod sandbox: default/busybox/POD" id=db730fbb-fd1a-455b-b8f8-cc07996ab20a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.359085115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.363667971Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:851b707fdcffab7a95aa6564921614f712352bca38da4de0ca3fa911c6a30a97 UID:c17e2533-1c59-43f6-b58d-8825c7f2d554 NetNS:/var/run/netns/778d4728-c521-48e8-aeb0-71eef03ff066 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ae18}] Aliases:map[]}"
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.363708281Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.373487178Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:851b707fdcffab7a95aa6564921614f712352bca38da4de0ca3fa911c6a30a97 UID:c17e2533-1c59-43f6-b58d-8825c7f2d554 NetNS:/var/run/netns/778d4728-c521-48e8-aeb0-71eef03ff066 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ae18}] Aliases:map[]}"
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.373656312Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.374675857Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.375869899Z" level=info msg="Ran pod sandbox 851b707fdcffab7a95aa6564921614f712352bca38da4de0ca3fa911c6a30a97 with infra container: default/busybox/POD" id=db730fbb-fd1a-455b-b8f8-cc07996ab20a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.37713843Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3584def5-7d10-4c39-ac31-d3d80a8c1898 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.377249007Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3584def5-7d10-4c39-ac31-d3d80a8c1898 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.377284632Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3584def5-7d10-4c39-ac31-d3d80a8c1898 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.378083209Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=60ab0dc1-6594-4233-8592-17715ea35404 name=/runtime.v1.ImageService/PullImage
	Dec 16 05:04:57 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:57.382830774Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.705255177Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=60ab0dc1-6594-4233-8592-17715ea35404 name=/runtime.v1.ImageService/PullImage
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.706057382Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=373318f6-81e9-4d59-bc40-22c8fa7b8623 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.707471557Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a33e3168-8334-479b-9c44-5002ca7c0556 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.711141052Z" level=info msg="Creating container: default/busybox/busybox" id=80728bd4-fee1-4800-93be-4c16882f807f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.711258466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.717062665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.717456053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.743523486Z" level=info msg="Created container ceb3ec34dcd1a195ba24429984a595fdf0b2bd39ecbc261b52db20acf933211b: default/busybox/busybox" id=80728bd4-fee1-4800-93be-4c16882f807f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.744164951Z" level=info msg="Starting container: ceb3ec34dcd1a195ba24429984a595fdf0b2bd39ecbc261b52db20acf933211b" id=52cc2607-ceef-4cb1-b959-f74bc250e0ed name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:04:58 default-k8s-diff-port-375252 crio[780]: time="2025-12-16T05:04:58.746098388Z" level=info msg="Started container" PID=1954 containerID=ceb3ec34dcd1a195ba24429984a595fdf0b2bd39ecbc261b52db20acf933211b description=default/busybox/busybox id=52cc2607-ceef-4cb1-b959-f74bc250e0ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=851b707fdcffab7a95aa6564921614f712352bca38da4de0ca3fa911c6a30a97
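	
	The sequence above is the full CRI lifecycle for the busybox pod: sandbox run, image-status miss, pull, create, start. It can be cross-checked from the node with standard crictl subcommands (a sketch; run via `minikube ssh` or directly on the node, and crictl resolves truncated ID prefixes):
	
	  sudo crictl images | grep busybox        # image pulled at 05:04:58
	  sudo crictl ps --name busybox            # running container ceb3ec34dcd1a...
	  sudo crictl inspect ceb3ec34dcd1a        # full runtime state for that container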
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	ceb3ec34dcd1a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   851b707fdcffa       busybox                                                default
	bb0012d0ac76a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   50d5054c50089       coredns-66bc5c9577-65x5z                               kube-system
	285d126ff1ce4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   e87519198e6e2       storage-provisioner                                    kube-system
	85ef31f31d700       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   2a1daf0d2d07a       kube-proxy-dmw9t                                       kube-system
	b6994d297f0a2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   067a71db9224f       kindnet-xml94                                          kube-system
	d6c4491b6e29f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   18a59fb364685       etcd-default-k8s-diff-port-375252                      kube-system
	929aa7b85e50c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   4a84b11788d57       kube-controller-manager-default-k8s-diff-port-375252   kube-system
	a8598b291bb07       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   be8f3422f083f       kube-apiserver-default-k8s-diff-port-375252            kube-system
	9b5d24cfeab3f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   6ddce2e42cf7c       kube-scheduler-default-k8s-diff-port-375252            kube-system
	
	
	==> coredns [bb0012d0ac76aa15466416c603e7484cd53d1b3e57f5def9ffbcbc542d8a0bcc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39158 - 13011 "HINFO IN 3757698617574400764.1487524272995632309. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034332691s
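	
	The single HINFO query above is the loop plugin's self-probe (a random-name query CoreDNS sends itself from localhost); NXDOMAIN is the expected answer. A quick in-cluster resolution test against this CoreDNS (a sketch using the classic busybox:1.28 debug image; the pod name is arbitrary):
	
	  kubectl --context default-k8s-diff-port-375252 run dnstest --rm -it \
	    --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local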
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-375252
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-375252
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=default-k8s-diff-port-375252
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:04:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-375252
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:04:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:04:54 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:04:54 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:04:54 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:04:54 +0000   Tue, 16 Dec 2025 05:04:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-375252
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                226f3bd9-396d-4746-a6f5-231ce56a4f47
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-65x5z                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-375252                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-xml94                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-375252             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-375252    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-dmw9t                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-375252             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-375252 event: Registered Node default-k8s-diff-port-375252 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-375252 status is now: NodeReady
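	
	This whole section is a node describe; to regenerate it against the same profile (assuming the test's kubeconfig context is still present):
	
	  kubectl --context default-k8s-diff-port-375252 describe node default-k8s-diff-port-375252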
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
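	
	The repeated "martian source" lines are the kernel rejecting packets that claim a 127.0.0.1 source on eth0; their Dec16 04:27-04:28 timestamps predate this profile, so they are leftover noise from an earlier test on the host. To watch for new ones live (standard util-linux dmesg flags):
	
	  sudo dmesg --follow --ctime | grep -i martian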
	
	
	==> etcd [d6c4491b6e29f8a57d99589fa93cad63c08abe47a1601007601a243b2ee5f87e] <==
	{"level":"warn","ts":"2025-12-16T05:04:34.743283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.750910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.760785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.775195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.782338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.790168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.797178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.804356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.813036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.820325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.827961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.835922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.843612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.850400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.858857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.867007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.878083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.885325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.893175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.900154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.907902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.925469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.932130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.938430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:34.984048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48640","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:05:07 up 47 min,  0 user,  load average: 4.11, 2.79, 1.93
	Linux default-k8s-diff-port-375252 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b6994d297f0a27efd57a427dd39cded6feac2180a8fe53526c9b49a56106e986] <==
	I1216 05:04:44.238076       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:04:44.238345       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 05:04:44.238491       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:04:44.238510       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:04:44.238537       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:04:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:04:44.440841       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:04:44.440904       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:04:44.440918       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:04:44.441418       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:04:44.841900       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:04:44.841924       1 metrics.go:72] Registering metrics
	I1216 05:04:44.841964       1 controller.go:711] "Syncing nftables rules"
	I1216 05:04:54.444611       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:04:54.444670       1 main.go:301] handling current node
	I1216 05:05:04.444457       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:05:04.444507       1 main.go:301] handling current node
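	
	kindnet here is doing double duty: CNI routing for the single node plus the network-policy controller syncing nftables rules (the NRI socket error is benign when CRI-O runs without NRI enabled). The synced rules can be inspected on the node (a sketch; the table names are kindnet internals, so listing the whole ruleset is the safe query):
	
	  sudo nft list ruleset | less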
	
	
	==> kube-apiserver [a8598b291bb07cc4dcb19d536fd43a937ac1ad264ffad863a3813f8eeecd9727] <==
	E1216 05:04:35.544514       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1216 05:04:35.554850       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:04:35.558340       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1216 05:04:35.558843       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:35.563953       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:04:35.564114       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:35.749047       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:04:36.357120       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 05:04:36.360824       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 05:04:36.360838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:04:36.812349       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:04:36.846649       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:04:36.962097       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 05:04:36.969582       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1216 05:04:36.970624       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:04:36.974670       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:04:37.385142       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:04:38.136897       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:04:38.147617       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 05:04:38.154626       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 05:04:42.588686       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:04:43.040393       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:43.044487       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:04:43.540374       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1216 05:05:05.176317       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:48154: use of closed network connection
	
	
	==> kube-controller-manager [929aa7b85e50cb4beeed4cd06d10d2ee5406e6674aafc67f37643d141e2f3832] <==
	I1216 05:04:42.384590       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 05:04:42.384606       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1216 05:04:42.384798       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1216 05:04:42.384848       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 05:04:42.385128       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 05:04:42.385644       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 05:04:42.386234       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 05:04:42.386239       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 05:04:42.387379       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 05:04:42.387408       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 05:04:42.387458       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 05:04:42.388617       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 05:04:42.388658       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 05:04:42.391091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:04:42.394272       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:04:42.395451       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 05:04:42.400908       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 05:04:42.405395       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 05:04:42.411738       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 05:04:42.423348       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:04:42.434561       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:04:42.434581       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 05:04:42.434590       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1216 05:04:43.767752       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1216 05:04:57.387124       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [85ef31f31d70019837dc0b2a1c199b7ca576da7d264e67d119a1dccfd74d94fd] <==
	I1216 05:04:44.125344       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:04:44.192127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:04:44.292451       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:04:44.292489       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 05:04:44.292575       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:04:44.310912       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:04:44.310966       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:04:44.315905       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:04:44.316354       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:04:44.316386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:04:44.317808       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:04:44.317844       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:04:44.317847       1 config.go:200] "Starting service config controller"
	I1216 05:04:44.317871       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:04:44.317881       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:04:44.317888       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:04:44.317932       1 config.go:309] "Starting node config controller"
	I1216 05:04:44.317940       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:04:44.317952       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:04:44.418062       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:04:44.418157       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:04:44.418152       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
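	
	The startup warning above is kube-proxy itself suggesting that nodePortAddresses be narrowed. In kubeadm-style clusters (minikube included) that field lives in the kube-proxy ConfigMap; a sketch of checking and narrowing it (the ConfigMap name and embedded config key follow kubeadm conventions, assumed here):
	
	  kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	  # In the embedded KubeProxyConfiguration, set:
	  #   nodePortAddresses: ["primary"]
	  kubectl -n kube-system edit configmap kube-proxy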
	
	
	==> kube-scheduler [9b5d24cfeab3fe4418811966351ede28d226d10b8671c41ea2cac909b0737959] <==
	E1216 05:04:35.408367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 05:04:35.408593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 05:04:35.408850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 05:04:35.408929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 05:04:35.408959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 05:04:35.409000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 05:04:35.409014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 05:04:35.409009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 05:04:35.409069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 05:04:35.409098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 05:04:35.409116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 05:04:35.409185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 05:04:35.409213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 05:04:35.409268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 05:04:36.243671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 05:04:36.260794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 05:04:36.288887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 05:04:36.345394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 05:04:36.384400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 05:04:36.437052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 05:04:36.438926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 05:04:36.528614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 05:04:36.563781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 05:04:36.659353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1216 05:04:39.006098       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 05:04:39 default-k8s-diff-port-375252 kubelet[1363]: E1216 05:04:39.022053    1363 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-375252\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-375252"
	Dec 16 05:04:39 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:39.049723    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-375252" podStartSLOduration=1.0496998849999999 podStartE2EDuration="1.049699885s" podCreationTimestamp="2025-12-16 05:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:39.0378795 +0000 UTC m=+1.152244900" watchObservedRunningTime="2025-12-16 05:04:39.049699885 +0000 UTC m=+1.164065285"
	Dec 16 05:04:39 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:39.049891    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-375252" podStartSLOduration=3.049877532 podStartE2EDuration="3.049877532s" podCreationTimestamp="2025-12-16 05:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:39.049578292 +0000 UTC m=+1.163943692" watchObservedRunningTime="2025-12-16 05:04:39.049877532 +0000 UTC m=+1.164242934"
	Dec 16 05:04:39 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:39.072096    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-375252" podStartSLOduration=1.072070521 podStartE2EDuration="1.072070521s" podCreationTimestamp="2025-12-16 05:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:39.060895584 +0000 UTC m=+1.175260984" watchObservedRunningTime="2025-12-16 05:04:39.072070521 +0000 UTC m=+1.186435923"
	Dec 16 05:04:39 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:39.083219    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-375252" podStartSLOduration=1.083195534 podStartE2EDuration="1.083195534s" podCreationTimestamp="2025-12-16 05:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:39.072864846 +0000 UTC m=+1.187230246" watchObservedRunningTime="2025-12-16 05:04:39.083195534 +0000 UTC m=+1.197560929"
	Dec 16 05:04:42 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:42.421451    1363 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 05:04:42 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:42.422371    1363 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 05:04:43 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:43.804687    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c72d3d28-eaa9-4a35-b7b9-5207edf40372-kube-proxy\") pod \"kube-proxy-dmw9t\" (UID: \"c72d3d28-eaa9-4a35-b7b9-5207edf40372\") " pod="kube-system/kube-proxy-dmw9t"
	Dec 16 05:04:43 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:43.804751    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2f0ff435-e0ff-441e-bdf8-0cf9a6da784c-cni-cfg\") pod \"kindnet-xml94\" (UID: \"2f0ff435-e0ff-441e-bdf8-0cf9a6da784c\") " pod="kube-system/kindnet-xml94"
	Dec 16 05:04:43 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:43.804773    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f0ff435-e0ff-441e-bdf8-0cf9a6da784c-xtables-lock\") pod \"kindnet-xml94\" (UID: \"2f0ff435-e0ff-441e-bdf8-0cf9a6da784c\") " pod="kube-system/kindnet-xml94"
	Dec 16 05:04:43 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:43.804796    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jh7l\" (UniqueName: \"kubernetes.io/projected/2f0ff435-e0ff-441e-bdf8-0cf9a6da784c-kube-api-access-2jh7l\") pod \"kindnet-xml94\" (UID: \"2f0ff435-e0ff-441e-bdf8-0cf9a6da784c\") " pod="kube-system/kindnet-xml94"
	Dec 16 05:04:43 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:43.804908    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c72d3d28-eaa9-4a35-b7b9-5207edf40372-lib-modules\") pod \"kube-proxy-dmw9t\" (UID: \"c72d3d28-eaa9-4a35-b7b9-5207edf40372\") " pod="kube-system/kube-proxy-dmw9t"
	Dec 16 05:04:43 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:43.804960    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f0ff435-e0ff-441e-bdf8-0cf9a6da784c-lib-modules\") pod \"kindnet-xml94\" (UID: \"2f0ff435-e0ff-441e-bdf8-0cf9a6da784c\") " pod="kube-system/kindnet-xml94"
	Dec 16 05:04:43 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:43.804988    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmj69\" (UniqueName: \"kubernetes.io/projected/c72d3d28-eaa9-4a35-b7b9-5207edf40372-kube-api-access-cmj69\") pod \"kube-proxy-dmw9t\" (UID: \"c72d3d28-eaa9-4a35-b7b9-5207edf40372\") " pod="kube-system/kube-proxy-dmw9t"
	Dec 16 05:04:43 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:43.805066    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c72d3d28-eaa9-4a35-b7b9-5207edf40372-xtables-lock\") pod \"kube-proxy-dmw9t\" (UID: \"c72d3d28-eaa9-4a35-b7b9-5207edf40372\") " pod="kube-system/kube-proxy-dmw9t"
	Dec 16 05:04:45 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:45.041752    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dmw9t" podStartSLOduration=2.041729858 podStartE2EDuration="2.041729858s" podCreationTimestamp="2025-12-16 05:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:45.033046646 +0000 UTC m=+7.147412046" watchObservedRunningTime="2025-12-16 05:04:45.041729858 +0000 UTC m=+7.156095273"
	Dec 16 05:04:45 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:45.050272    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xml94" podStartSLOduration=2.050246754 podStartE2EDuration="2.050246754s" podCreationTimestamp="2025-12-16 05:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:45.050151193 +0000 UTC m=+7.164516596" watchObservedRunningTime="2025-12-16 05:04:45.050246754 +0000 UTC m=+7.164612156"
	Dec 16 05:04:54 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:54.586180    1363 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 16 05:04:54 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:54.687328    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c610230-fb3c-4bef-a6d3-8870276ed93e-config-volume\") pod \"coredns-66bc5c9577-65x5z\" (UID: \"8c610230-fb3c-4bef-a6d3-8870276ed93e\") " pod="kube-system/coredns-66bc5c9577-65x5z"
	Dec 16 05:04:54 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:54.687389    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk7c9\" (UniqueName: \"kubernetes.io/projected/8c610230-fb3c-4bef-a6d3-8870276ed93e-kube-api-access-wk7c9\") pod \"coredns-66bc5c9577-65x5z\" (UID: \"8c610230-fb3c-4bef-a6d3-8870276ed93e\") " pod="kube-system/coredns-66bc5c9577-65x5z"
	Dec 16 05:04:54 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:54.687424    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e819c1d3-f079-4b3e-a4b1-9d2cfbf37344-tmp\") pod \"storage-provisioner\" (UID: \"e819c1d3-f079-4b3e-a4b1-9d2cfbf37344\") " pod="kube-system/storage-provisioner"
	Dec 16 05:04:54 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:54.687451    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mpkl\" (UniqueName: \"kubernetes.io/projected/e819c1d3-f079-4b3e-a4b1-9d2cfbf37344-kube-api-access-8mpkl\") pod \"storage-provisioner\" (UID: \"e819c1d3-f079-4b3e-a4b1-9d2cfbf37344\") " pod="kube-system/storage-provisioner"
	Dec 16 05:04:55 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:55.063272    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-65x5z" podStartSLOduration=12.063248681 podStartE2EDuration="12.063248681s" podCreationTimestamp="2025-12-16 05:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:55.063036176 +0000 UTC m=+17.177401578" watchObservedRunningTime="2025-12-16 05:04:55.063248681 +0000 UTC m=+17.177614082"
	Dec 16 05:04:55 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:55.063396    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.063386785 podStartE2EDuration="12.063386785s" podCreationTimestamp="2025-12-16 05:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:04:55.054046789 +0000 UTC m=+17.168412189" watchObservedRunningTime="2025-12-16 05:04:55.063386785 +0000 UTC m=+17.177752186"
	Dec 16 05:04:57 default-k8s-diff-port-375252 kubelet[1363]: I1216 05:04:57.102239    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7bn\" (UniqueName: \"kubernetes.io/projected/c17e2533-1c59-43f6-b58d-8825c7f2d554-kube-api-access-tm7bn\") pod \"busybox\" (UID: \"c17e2533-1c59-43f6-b58d-8825c7f2d554\") " pod="default/busybox"
	
	
	==> storage-provisioner [285d126ff1ce4aa196dcdd4177f252143a4ea859b6315aa125c516a24e07d5cd] <==
	I1216 05:04:54.977304       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:04:54.985502       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:04:54.985596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:04:54.988096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:54.994110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:04:54.994286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:04:54.994417       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d1dfb198-3305-4c9d-9019-8ec75faa390c", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-375252_51be0f53-eb03-4404-a465-2bf2ba71d9e5 became leader
	I1216 05:04:54.994480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-375252_51be0f53-eb03-4404-a465-2bf2ba71d9e5!
	W1216 05:04:54.996603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:54.999552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:04:55.095090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-375252_51be0f53-eb03-4404-a465-2bf2ba71d9e5!
	W1216 05:04:57.003435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:57.007876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:59.012064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:04:59.017423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:01.020808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:01.024672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:03.028933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:03.034832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:05.038670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:05.043278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:07.046995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:07.058068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
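	
	The Endpoints deprecation warnings fire on every leader-election renew: the provisioner still refreshes a v1 Endpoints lock (named in the log) roughly every two seconds. That lock object can be inspected directly:
	
	  kubectl --context default-k8s-diff-port-375252 -n kube-system \
	    get endpoints k8s.io-minikube-hostpath -o yaml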
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-375252 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.81s)
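
Note on the repeated "v1 Endpoints is deprecated in v1.33+" warnings in the provisioner log above: they come from the storage provisioner's Endpoints-based leader election (the leaderelection.go lines acquiring kube-system/k8s.io-minikube-hostpath). For reference, a minimal sketch of the Lease-based lock the warning points toward, using client-go; the lease name and namespace are taken from the log, while the identity string and timings are illustrative assumptions, not the provisioner's actual configuration:

	// Sketch: Lease-based leader election (client-go), the replacement for
	// the deprecated Endpoints lock used by the provisioner above.
	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same lock name/namespace as in the log, but stored as a
		// coordination.k8s.io Lease instead of a v1 Endpoints object,
		// so the API server emits no deprecation warning.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-1"}, // assumed identity
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* release resources */ },
			},
		})
	}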

TestStartStop/group/old-k8s-version/serial/Pause (5.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-545075 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-545075 --alsologtostderr -v=1: exit status 80 (1.873248298s)

-- stdout --
	* Pausing node old-k8s-version-545075 ... 
	
	

-- /stdout --
** stderr ** 
	I1216 05:05:16.286688  290379 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:05:16.286804  290379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:16.286819  290379 out.go:374] Setting ErrFile to fd 2...
	I1216 05:05:16.286825  290379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:16.287140  290379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:05:16.287475  290379 out.go:368] Setting JSON to false
	I1216 05:05:16.287493  290379 mustload.go:66] Loading cluster: old-k8s-version-545075
	I1216 05:05:16.288129  290379 config.go:182] Loaded profile config "old-k8s-version-545075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:05:16.288719  290379 cli_runner.go:164] Run: docker container inspect old-k8s-version-545075 --format={{.State.Status}}
	I1216 05:05:16.312910  290379 host.go:66] Checking if "old-k8s-version-545075" exists ...
	I1216 05:05:16.313275  290379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:16.387093  290379 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-16 05:05:16.37401131 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:16.387974  290379 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-545075 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 05:05:16.390457  290379 out.go:179] * Pausing node old-k8s-version-545075 ... 
	I1216 05:05:16.391593  290379 host.go:66] Checking if "old-k8s-version-545075" exists ...
	I1216 05:05:16.391935  290379 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:16.391994  290379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545075
	I1216 05:05:16.414150  290379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/old-k8s-version-545075/id_rsa Username:docker}
	I1216 05:05:16.518677  290379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:16.550102  290379 pause.go:52] kubelet running: true
	I1216 05:05:16.550175  290379 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:16.770642  290379 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:16.770769  290379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:16.856495  290379 cri.go:89] found id: "d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4"
	I1216 05:05:16.856519  290379 cri.go:89] found id: "76ae5f75a2c9c80d756790c824b41bf144abd8148eaeb535a627919c1efa7675"
	I1216 05:05:16.856525  290379 cri.go:89] found id: "a6c3f822d7671ca410f51914f238e079a6de421c2cd37d8725a605390702b0ff"
	I1216 05:05:16.856530  290379 cri.go:89] found id: "ac47c1b8c4f01cb1fbcb3874ad2386605c553ce37f62c41a839967cc9e9f24d9"
	I1216 05:05:16.856535  290379 cri.go:89] found id: "926327055a53dcd7c24cb4836a69ddf126c05855312a52841c42a80393b3226a"
	I1216 05:05:16.856541  290379 cri.go:89] found id: "01c4e82517b36d1f66b8f2d1adb4883fc3cc6a22d531ab5783d092744f69a8d2"
	I1216 05:05:16.856545  290379 cri.go:89] found id: "5aeeae98995f110f455b137eea822f077ed8722bcfac6f2d52cb7baa211a4f13"
	I1216 05:05:16.856550  290379 cri.go:89] found id: "31f7764ec0af58c2787749a48d903be38a43b0773a2a9b369902c7942218dc65"
	I1216 05:05:16.856554  290379 cri.go:89] found id: "af4a752092a6dfab68baa25810fa6bf8af6321385deaa1ca39c71f745e63993c"
	I1216 05:05:16.856572  290379 cri.go:89] found id: "5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7"
	I1216 05:05:16.856580  290379 cri.go:89] found id: "1f44c8595e54b12a159c20e93ca46146d14d3cb2b264a6dddf302b7d3be69f68"
	I1216 05:05:16.856584  290379 cri.go:89] found id: ""
	I1216 05:05:16.856628  290379 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:16.871827  290379 retry.go:31] will retry after 195.856697ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:16Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:17.068271  290379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:17.081104  290379 pause.go:52] kubelet running: false
	I1216 05:05:17.081174  290379 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:17.222006  290379 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:17.222117  290379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:17.293098  290379 cri.go:89] found id: "d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4"
	I1216 05:05:17.293117  290379 cri.go:89] found id: "76ae5f75a2c9c80d756790c824b41bf144abd8148eaeb535a627919c1efa7675"
	I1216 05:05:17.293121  290379 cri.go:89] found id: "a6c3f822d7671ca410f51914f238e079a6de421c2cd37d8725a605390702b0ff"
	I1216 05:05:17.293125  290379 cri.go:89] found id: "ac47c1b8c4f01cb1fbcb3874ad2386605c553ce37f62c41a839967cc9e9f24d9"
	I1216 05:05:17.293128  290379 cri.go:89] found id: "926327055a53dcd7c24cb4836a69ddf126c05855312a52841c42a80393b3226a"
	I1216 05:05:17.293131  290379 cri.go:89] found id: "01c4e82517b36d1f66b8f2d1adb4883fc3cc6a22d531ab5783d092744f69a8d2"
	I1216 05:05:17.293134  290379 cri.go:89] found id: "5aeeae98995f110f455b137eea822f077ed8722bcfac6f2d52cb7baa211a4f13"
	I1216 05:05:17.293137  290379 cri.go:89] found id: "31f7764ec0af58c2787749a48d903be38a43b0773a2a9b369902c7942218dc65"
	I1216 05:05:17.293140  290379 cri.go:89] found id: "af4a752092a6dfab68baa25810fa6bf8af6321385deaa1ca39c71f745e63993c"
	I1216 05:05:17.293158  290379 cri.go:89] found id: "5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7"
	I1216 05:05:17.293162  290379 cri.go:89] found id: "1f44c8595e54b12a159c20e93ca46146d14d3cb2b264a6dddf302b7d3be69f68"
	I1216 05:05:17.293166  290379 cri.go:89] found id: ""
	I1216 05:05:17.293208  290379 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:17.304947  290379 retry.go:31] will retry after 517.337359ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:17Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:17.822608  290379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:17.835715  290379 pause.go:52] kubelet running: false
	I1216 05:05:17.835761  290379 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:17.989443  290379 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:17.989550  290379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:18.062513  290379 cri.go:89] found id: "d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4"
	I1216 05:05:18.062538  290379 cri.go:89] found id: "76ae5f75a2c9c80d756790c824b41bf144abd8148eaeb535a627919c1efa7675"
	I1216 05:05:18.062544  290379 cri.go:89] found id: "a6c3f822d7671ca410f51914f238e079a6de421c2cd37d8725a605390702b0ff"
	I1216 05:05:18.062549  290379 cri.go:89] found id: "ac47c1b8c4f01cb1fbcb3874ad2386605c553ce37f62c41a839967cc9e9f24d9"
	I1216 05:05:18.062554  290379 cri.go:89] found id: "926327055a53dcd7c24cb4836a69ddf126c05855312a52841c42a80393b3226a"
	I1216 05:05:18.062559  290379 cri.go:89] found id: "01c4e82517b36d1f66b8f2d1adb4883fc3cc6a22d531ab5783d092744f69a8d2"
	I1216 05:05:18.062564  290379 cri.go:89] found id: "5aeeae98995f110f455b137eea822f077ed8722bcfac6f2d52cb7baa211a4f13"
	I1216 05:05:18.062568  290379 cri.go:89] found id: "31f7764ec0af58c2787749a48d903be38a43b0773a2a9b369902c7942218dc65"
	I1216 05:05:18.062572  290379 cri.go:89] found id: "af4a752092a6dfab68baa25810fa6bf8af6321385deaa1ca39c71f745e63993c"
	I1216 05:05:18.062580  290379 cri.go:89] found id: "5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7"
	I1216 05:05:18.062584  290379 cri.go:89] found id: "1f44c8595e54b12a159c20e93ca46146d14d3cb2b264a6dddf302b7d3be69f68"
	I1216 05:05:18.062588  290379 cri.go:89] found id: ""
	I1216 05:05:18.062631  290379 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:18.077484  290379 out.go:203] 
	W1216 05:05:18.078836  290379 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 05:05:18.078854  290379 out.go:285] * 
	* 
	W1216 05:05:18.084432  290379 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:05:18.087126  290379 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-545075 --alsologtostderr -v=1 failed: exit status 80
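
Note on the two retry.go lines above ("will retry after 195.856697ms", then 517.337359ms): to pause, minikube stops the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers through crictl, then asks "sudo runc list -f json" for the running set. Every attempt here fails with "open /run/runc: no such file or directory" (note that /run is a tmpfs in this container, per the docker inspect output below), and once the retries are exhausted the error surfaces as GUEST_PAUSE. A minimal sketch of that jittered exponential-backoff pattern; the helper name and parameters are assumptions for illustration, not minikube's actual retry package:

	// Sketch: retry with jittered exponential backoff, the pattern behind
	// the "will retry after ..." log lines above. Parameters are assumed.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// base * 2^i, scaled by a random factor in [0.5, 1.5) for jitter.
			sleep := time.Duration(float64(base) * float64(uint(1)<<i) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(3, 200*time.Millisecond, func() error {
			return fmt.Errorf("list running: open /run/runc: no such file or directory")
		})
		fmt.Println("giving up:", err)
	}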
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-545075
helpers_test.go:244: (dbg) docker inspect old-k8s-version-545075:

-- stdout --
	[
	    {
	        "Id": "670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa",
	        "Created": "2025-12-16T05:02:59.987474092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 275525,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:04:13.654824769Z",
	            "FinishedAt": "2025-12-16T05:04:12.679746004Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/hosts",
	        "LogPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa-json.log",
	        "Name": "/old-k8s-version-545075",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-545075:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-545075",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa",
	                "LowerDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-545075",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-545075/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-545075",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-545075",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-545075",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bd7017d38931ac59cb777bbac12b61e6f1b96f66635beb43f79dfd01f4c7a219",
	            "SandboxKey": "/var/run/docker/netns/bd7017d38931",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-545075": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "333da06948413c05c3047e735767d3346609d7c3a2cd33e3c019b11d1e8453a4",
	                    "EndpointID": "e2624061267c4b533793c173c69e1bcffa469e9a85818b6f8427f35b5e1c7d16",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f6:ac:ac:ad:33:c1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-545075",
	                        "670e802935b6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
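
The post-mortem reads individual fields out of this same inspect payload with Go templates (docker container inspect --format above, minikube status --format next). A minimal sketch of the equivalent programmatic check with the Docker Engine Go SDK; the container name is taken from the log, the rest is illustrative:

	// Sketch: inspect a container with the Docker Engine Go SDK and read the
	// same State fields the post-mortem helpers query via --format templates.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-545075")
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", info.State.Status, "paused:", info.State.Paused)
	}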
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-545075 -n old-k8s-version-545075
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-545075 -n old-k8s-version-545075: exit status 2 (320.231292ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-545075 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-545075 logs -n 25: (1.087857777s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p cert-expiration-123545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p cert-expiration-123545                                                                                                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │                     │
	│ stop    │ -p old-k8s-version-545075 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-545075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p kubernetes-upgrade-132399                                                                                                                                                                                                                  │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ delete  │ -p disable-driver-mounts-148966                                                                                                                                                                                                               │ disable-driver-mounts-148966 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p no-preload-990631 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p embed-certs-127599 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                               │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:04:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:04:58.807546  285878 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:04:58.807646  285878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:58.807658  285878 out.go:374] Setting ErrFile to fd 2...
	I1216 05:04:58.807663  285878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:58.807870  285878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:04:58.808307  285878 out.go:368] Setting JSON to false
	I1216 05:04:58.809533  285878 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2850,"bootTime":1765858649,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:04:58.809586  285878 start.go:143] virtualization: kvm guest
	I1216 05:04:58.811715  285878 out.go:179] * [embed-certs-127599] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:04:58.813202  285878 notify.go:221] Checking for updates...
	I1216 05:04:58.813273  285878 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:04:58.814671  285878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:04:58.816055  285878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:58.817240  285878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:04:58.818360  285878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:04:58.819590  285878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:04:58.821480  285878 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:04:58.822299  285878 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:04:58.848069  285878 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:04:58.848202  285878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:58.906263  285878 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:04:58.896349391 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:58.906399  285878 docker.go:319] overlay module found
	I1216 05:04:58.908347  285878 out.go:179] * Using the docker driver based on existing profile
	I1216 05:04:58.909504  285878 start.go:309] selected driver: docker
	I1216 05:04:58.909519  285878 start.go:927] validating driver "docker" against &{Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:58.909604  285878 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:04:58.910114  285878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:58.971198  285878 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:04:58.959512916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:58.971557  285878 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:58.971608  285878 cni.go:84] Creating CNI manager for ""
	I1216 05:04:58.971676  285878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:58.971782  285878 start.go:353] cluster config:
	{Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:58.974821  285878 out.go:179] * Starting "embed-certs-127599" primary control-plane node in "embed-certs-127599" cluster
	I1216 05:04:58.976420  285878 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:04:58.978154  285878 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:04:58.980043  285878 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:58.980116  285878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:04:58.980120  285878 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:04:58.980154  285878 cache.go:65] Caching tarball of preloaded images
	I1216 05:04:58.980280  285878 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:04:58.980291  285878 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:04:58.980419  285878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/config.json ...
	I1216 05:04:59.006359  285878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:04:59.006379  285878 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:04:59.006397  285878 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:04:59.006434  285878 start.go:360] acquireMachinesLock for embed-certs-127599: {Name:mk782536a8f26bd7063e28ead9562fcba9d13ffe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:04:59.006494  285878 start.go:364] duration metric: took 39.093µs to acquireMachinesLock for "embed-certs-127599"
	I1216 05:04:59.006518  285878 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:04:59.006527  285878 fix.go:54] fixHost starting: 
	I1216 05:04:59.006819  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:04:59.030521  285878 fix.go:112] recreateIfNeeded on embed-certs-127599: state=Stopped err=<nil>
	W1216 05:04:59.030575  285878 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:04:58.146615  284202 cli_runner.go:164] Run: docker network inspect no-preload-990631 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:58.164702  284202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 05:04:58.168903  284202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:58.179164  284202 kubeadm.go:884] updating cluster {Name:no-preload-990631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
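The single-line cluster dump above is Go's %+v struct formatting: every field printed as Name:value with no quoting, which is why these long nested configs wrap so awkwardly in the report. A tiny sketch of how such a record is produced; the ClusterConfig type here is an illustrative subset, not minikube's real struct:

package main

import "log"

// ClusterConfig is an illustrative subset of a cluster config struct;
// the real one in the log above has many more fields.
type ClusterConfig struct {
	Name          string
	Driver        string
	Memory        int
	CPUs          int
	APIServerPort int
}

func main() {
	cfg := ClusterConfig{Name: "no-preload-990631", Driver: "docker",
		Memory: 3072, CPUs: 2, APIServerPort: 8443}
	// %+v prints field names alongside values, producing the same
	// space-separated Key:value form seen in the log.
	log.Printf("updating cluster %+v ...", cfg)
}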
	I1216 05:04:58.179295  284202 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:04:58.179337  284202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:58.211615  284202 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:58.211639  284202 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:04:58.211647  284202 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 05:04:58.211734  284202 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-990631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
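The kubelet drop-in above uses a standard systemd idiom: the empty ExecStart= line clears the ExecStart inherited from the base unit before the second line installs the override, so the drop-in fully replaces the command instead of appending a second one. A sketch of rendering that drop-in with Go's text/template; the template fields and the flow are illustrative, not minikube's actual code:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn mirrors the [Service] section above: the blank
// ExecStart= resets the base unit's command before redefining it.
const kubeletDropIn = `[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		"NodeName":    "no-preload-990631",
		"NodeIP":      "192.168.103.2",
	})
}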
	I1216 05:04:58.211797  284202 ssh_runner.go:195] Run: crio config
	I1216 05:04:58.256766  284202 cni.go:84] Creating CNI manager for ""
	I1216 05:04:58.256789  284202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:58.256801  284202 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:04:58.256822  284202 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-990631 NodeName:no-preload-990631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:04:58.256971  284202 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-990631"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:04:58.257052  284202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:04:58.265699  284202 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:04:58.265763  284202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:04:58.273700  284202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1216 05:04:58.286894  284202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:04:58.300035  284202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
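The 2223-byte kubeadm.yaml.new written above is the config dumped at kubeadm.go:196: four YAML documents in one stream, separated by ---, declaring InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal sketch of walking such a multi-document stream with gopkg.in/yaml.v3; the file path is taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.Decoder yields one document per Decode call and returns
	// io.EOF once the stream of ----separated documents is exhausted.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println("kind:", doc["kind"])
	}
}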
	I1216 05:04:58.313614  284202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:04:58.317210  284202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
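Both /etc/hosts edits above (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same idempotent pattern: filter out any existing line for the name, append a fresh IP mapping, and copy the result over /etc/hosts in one step so the file is never left half-written. A pure-Go sketch of the same pattern; the log does it in bash via a temp file under /tmp:

package main

import (
	"os"
	"strings"
)

// setHostsEntry drops any existing mapping for host and appends a new
// one, writing through a temp file so /etc/hosts is swapped in a single
// rename rather than edited in place.
func setHostsEntry(ip, host string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := "/etc/hosts.tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, "/etc/hosts")
}

func main() {
	if err := setHostsEntry("192.168.103.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}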
	I1216 05:04:58.327214  284202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:58.414343  284202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:58.443043  284202 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631 for IP: 192.168.103.2
	I1216 05:04:58.443061  284202 certs.go:195] generating shared ca certs ...
	I1216 05:04:58.443081  284202 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:58.443293  284202 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:04:58.443372  284202 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:04:58.443391  284202 certs.go:257] generating profile certs ...
	I1216 05:04:58.443497  284202 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.key
	I1216 05:04:58.443575  284202 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key.44083b8e
	I1216 05:04:58.443635  284202 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key
	I1216 05:04:58.443842  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:04:58.443905  284202 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:04:58.443919  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:04:58.443943  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:04:58.443970  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:04:58.443999  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:04:58.444082  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:58.444854  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:04:58.475449  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:04:58.495391  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:04:58.517175  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:04:58.549655  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:04:58.572549  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:04:58.596524  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:04:58.618088  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:04:58.640733  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:04:58.663625  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:04:58.682127  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:04:58.700365  284202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:04:58.715543  284202 ssh_runner.go:195] Run: openssl version
	I1216 05:04:58.722348  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.729959  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:04:58.738355  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.742405  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.742457  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.787292  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:04:58.796645  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.804122  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:04:58.812151  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.816028  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.816079  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.855153  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:04:58.863425  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.873487  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:04:58.882811  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.887176  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.887238  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.926865  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:04:58.937699  284202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:04:58.942829  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:04:58.990450  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:04:59.040055  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:04:59.099401  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:04:59.157276  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:04:59.215120  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
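Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regenerating. An equivalent check in Go's crypto/x509, as a sketch; the path comes from the log and the 24h window matches the 86400 seconds:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same test as `openssl x509 -checkend 86400`: does the cert
	// remain valid for at least another 24 hours?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, regenerate")
	} else {
		fmt.Println("certificate still valid")
	}
}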
	I1216 05:04:59.258742  284202 kubeadm.go:401] StartCluster: {Name:no-preload-990631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:59.258841  284202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:04:59.258895  284202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:04:59.292160  284202 cri.go:89] found id: "4b4c73e54f123802f62a7af7b9af9fe41679d8d51fd01e105ca758edf669a4a5"
	I1216 05:04:59.292185  284202 cri.go:89] found id: "c7bb8494fe95f58ef0c8c0f1c32dd0af7c7cc7b113eb924e8c2854b0b292a637"
	I1216 05:04:59.292192  284202 cri.go:89] found id: "3332f5674ea8590f97c6776b7b666398f75a26d82a6f36588091b67934df1dd5"
	I1216 05:04:59.292196  284202 cri.go:89] found id: "f9cec7fd839f6e0c5273a0f3e839f84aa3e28f3ba21d352e705a6f3c17d87bd9"
	I1216 05:04:59.292201  284202 cri.go:89] found id: ""
	I1216 05:04:59.292247  284202 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:04:59.307274  284202 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:04:59Z" level=error msg="open /run/runc: no such file or directory"
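The warning above is benign: with no paused containers, /run/runc does not exist and `runc list` exits 1, so the unpause step logs the failure and proceeds as if the paused set were empty rather than aborting the restart. A sketch of that tolerate-and-continue pattern around os/exec; the helper name listPausedIDs is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// listPausedIDs shells out to `runc list` and treats a failure (such as
// /run/runc missing because nothing was ever paused) as an empty result
// instead of a fatal error, matching the W-level log line above.
func listPausedIDs() []byte {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("W unpause failed: list paused:", err)
		return nil
	}
	return out
}

func main() {
	fmt.Printf("paused containers: %s\n", listPausedIDs())
}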
	I1216 05:04:59.307344  284202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:04:59.316697  284202 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:04:59.316715  284202 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:04:59.316758  284202 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:04:59.327769  284202 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:04:59.328615  284202 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-990631" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:59.329102  284202 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-990631" cluster setting kubeconfig missing "no-preload-990631" context setting]
	I1216 05:04:59.329859  284202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:59.331782  284202 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:04:59.341505  284202 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1216 05:04:59.341536  284202 kubeadm.go:602] duration metric: took 24.814841ms to restartPrimaryControlPlane
	I1216 05:04:59.341546  284202 kubeadm.go:403] duration metric: took 82.81371ms to StartCluster
	I1216 05:04:59.341562  284202 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:59.341627  284202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:59.343176  284202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:59.343399  284202 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:04:59.343621  284202 config.go:182] Loaded profile config "no-preload-990631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:04:59.343681  284202 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:04:59.343779  284202 addons.go:70] Setting storage-provisioner=true in profile "no-preload-990631"
	I1216 05:04:59.343815  284202 addons.go:239] Setting addon storage-provisioner=true in "no-preload-990631"
	W1216 05:04:59.343827  284202 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:04:59.343872  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.343982  284202 addons.go:70] Setting dashboard=true in profile "no-preload-990631"
	I1216 05:04:59.344000  284202 addons.go:70] Setting default-storageclass=true in profile "no-preload-990631"
	I1216 05:04:59.344031  284202 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-990631"
	I1216 05:04:59.344014  284202 addons.go:239] Setting addon dashboard=true in "no-preload-990631"
	W1216 05:04:59.344156  284202 addons.go:248] addon dashboard should already be in state true
	I1216 05:04:59.344188  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.344358  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.344495  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.344647  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.347229  284202 out.go:179] * Verifying Kubernetes components...
	I1216 05:04:59.349480  284202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:59.373084  284202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:04:59.373093  284202 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 05:04:59.373127  284202 addons.go:239] Setting addon default-storageclass=true in "no-preload-990631"
	W1216 05:04:59.373249  284202 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:04:59.373317  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.373809  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.374487  284202 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:59.374504  284202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:04:59.374557  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.376102  284202 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:04:59.377081  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:04:59.377099  284202 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:04:59.377145  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.399402  284202 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:59.399426  284202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:04:59.399475  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.404857  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:04:59.417430  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:04:59.431357  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:04:59.515049  284202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:59.526392  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:59.531687  284202 node_ready.go:35] waiting up to 6m0s for node "no-preload-990631" to be "Ready" ...
	I1216 05:04:59.533132  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:04:59.533157  284202 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:04:59.538389  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:59.549617  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:04:59.549639  284202 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:04:59.569671  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:04:59.569760  284202 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:04:59.593378  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:04:59.593405  284202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:04:59.621948  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:04:59.621972  284202 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:04:59.644345  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:04:59.644448  284202 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:04:59.661945  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:04:59.661975  284202 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:04:59.678222  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:04:59.678252  284202 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:04:59.700122  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:04:59.700148  284202 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:04:59.719777  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:00.484236  284202 node_ready.go:49] node "no-preload-990631" is "Ready"
	I1216 05:05:00.484283  284202 node_ready.go:38] duration metric: took 952.560787ms for node "no-preload-990631" to be "Ready" ...
	I1216 05:05:00.484302  284202 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:00.484360  284202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:01.061883  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.535448575s)
	I1216 05:05:01.061947  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.523532435s)
	I1216 05:05:01.062100  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.342232728s)
	I1216 05:05:01.062128  284202 api_server.go:72] duration metric: took 1.718698599s to wait for apiserver process to appear ...
	I1216 05:05:01.062138  284202 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:01.062165  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:01.064061  284202 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-990631 addons enable metrics-server
	
	I1216 05:05:01.067391  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:01.067413  284202 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:01.069255  284202 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1216 05:05:01.070615  284202 addons.go:530] duration metric: took 1.726932128s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1216 05:05:01.563168  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:01.567835  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:01.567870  284202 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:02.062462  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:02.066872  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 05:05:02.068217  284202 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:05:02.068239  284202 api_server.go:131] duration metric: took 1.006094275s to wait for apiserver health ...
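The healthz sequence above is a plain polling loop: GET https://192.168.103.2:8443/healthz, treat anything but 200 as not-ready (the 500 bodies enumerate which post-start hooks are still pending), sleep, retry. A minimal sketch of that loop; TLS verification is skipped here only because this illustrative probe targets the apiserver by IP under minikube's self-signed CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver presents a cert signed by minikube's own CA;
			// a production client would pin that CA instead of skipping checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// Non-200: some post-start hook has not finished yet.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}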
	I1216 05:05:02.068247  284202 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:02.072657  284202 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:02.072700  284202 system_pods.go:61] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:02.072712  284202 system_pods.go:61] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:02.072720  284202 system_pods.go:61] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:05:02.072735  284202 system_pods.go:61] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:02.072793  284202 system_pods.go:61] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:02.072801  284202 system_pods.go:61] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:05:02.072815  284202 system_pods.go:61] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:02.072820  284202 system_pods.go:61] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Running
	I1216 05:05:02.072827  284202 system_pods.go:74] duration metric: took 4.575509ms to wait for pod list to return data ...
	I1216 05:05:02.072835  284202 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:02.075526  284202 default_sa.go:45] found service account: "default"
	I1216 05:05:02.075545  284202 default_sa.go:55] duration metric: took 2.704707ms for default service account to be created ...
	I1216 05:05:02.075552  284202 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:05:02.078045  284202 system_pods.go:86] 8 kube-system pods found
	I1216 05:05:02.078069  284202 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:02.078076  284202 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:02.078081  284202 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:05:02.078087  284202 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:02.078098  284202 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:02.078105  284202 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:05:02.078113  284202 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:02.078120  284202 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Running
	I1216 05:05:02.078129  284202 system_pods.go:126] duration metric: took 2.571444ms to wait for k8s-apps to be running ...
	I1216 05:05:02.078140  284202 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:05:02.078198  284202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:02.090955  284202 system_svc.go:56] duration metric: took 12.80557ms WaitForService to wait for kubelet
	I1216 05:05:02.090982  284202 kubeadm.go:587] duration metric: took 2.74755423s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:02.090999  284202 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:02.093755  284202 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:02.093776  284202 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:02.093791  284202 node_conditions.go:105] duration metric: took 2.787822ms to run NodePressure ...
	I1216 05:05:02.093801  284202 start.go:242] waiting for startup goroutines ...
	I1216 05:05:02.093808  284202 start.go:247] waiting for cluster config update ...
	I1216 05:05:02.093821  284202 start.go:256] writing updated cluster config ...
	I1216 05:05:02.094117  284202 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:02.097851  284202 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:02.100920  284202 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
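The pod_ready waits above cycle through one label selector per control-plane component (k8s-app=kube-dns, component=etcd, and so on) and block until each matching pod reports the Ready condition or disappears. A compact client-go sketch of one such check; the kubeconfig path and selector are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List kube-system pods for one component's selector, then inspect
	// the PodReady condition the log's pod_ready wait is driven by.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("pod %q Ready=%s\n", pod.Name, cond.Status)
			}
		}
	}
}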
	W1216 05:05:00.516383  275199 pod_ready.go:104] pod "coredns-5dd5756b68-fplzg" is not "Ready", error: <nil>
	I1216 05:05:03.014998  275199 pod_ready.go:94] pod "coredns-5dd5756b68-fplzg" is "Ready"
	I1216 05:05:03.015051  275199 pod_ready.go:86] duration metric: took 36.505880263s for pod "coredns-5dd5756b68-fplzg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.018010  275199 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.022411  275199 pod_ready.go:94] pod "etcd-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.022432  275199 pod_ready.go:86] duration metric: took 4.39181ms for pod "etcd-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.025267  275199 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.029715  275199 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.029741  275199 pod_ready.go:86] duration metric: took 4.454401ms for pod "kube-apiserver-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.032803  275199 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.212694  275199 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.212718  275199 pod_ready.go:86] duration metric: took 179.891662ms for pod "kube-controller-manager-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:59.034761  285878 out.go:252] * Restarting existing docker container for "embed-certs-127599" ...
	I1216 05:04:59.034846  285878 cli_runner.go:164] Run: docker start embed-certs-127599
	I1216 05:04:59.342519  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:04:59.369236  285878 kic.go:430] container "embed-certs-127599" state is running.
	I1216 05:04:59.369666  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:04:59.398479  285878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/config.json ...
	I1216 05:04:59.398806  285878 machine.go:94] provisionDockerMachine start ...
	I1216 05:04:59.398910  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:04:59.434387  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:59.434715  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:04:59.434744  285878 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:04:59.435373  285878 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57022->127.0.0.1:33091: read: connection reset by peer
	I1216 05:05:02.565577  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-127599
	
	I1216 05:05:02.565604  285878 ubuntu.go:182] provisioning hostname "embed-certs-127599"
	I1216 05:05:02.565682  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.585360  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:02.585581  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:02.585595  285878 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-127599 && echo "embed-certs-127599" | sudo tee /etc/hostname
	I1216 05:05:02.721382  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-127599
	
	I1216 05:05:02.721466  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.739517  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:02.739721  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:02.739737  285878 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-127599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-127599/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-127599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:02.866626  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:02.866657  285878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:02.866676  285878 ubuntu.go:190] setting up certificates
	I1216 05:05:02.866687  285878 provision.go:84] configureAuth start
	I1216 05:05:02.866760  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:05:02.884961  285878 provision.go:143] copyHostCerts
	I1216 05:05:02.885047  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:02.885066  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:02.885138  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:02.885238  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:02.885250  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:02.885279  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:02.885346  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:02.885354  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:02.885376  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:02.885439  285878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.embed-certs-127599 san=[127.0.0.1 192.168.94.2 embed-certs-127599 localhost minikube]
	I1216 05:05:02.905981  285878 provision.go:177] copyRemoteCerts
	I1216 05:05:02.906048  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:02.906098  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.925287  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.020273  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:03.041871  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:05:03.060678  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:05:03.078064  285878 provision.go:87] duration metric: took 211.357317ms to configureAuth
	I1216 05:05:03.078092  285878 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:03.078242  285878 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:03.078323  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.096032  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:03.096295  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:03.096316  285878 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:03.415229  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:03.415249  285878 machine.go:97] duration metric: took 4.016422304s to provisionDockerMachine
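
Note: the SSH command above drops a one-line sysconfig file so CRI-O treats the in-cluster service CIDR (10.96.0.0/12) as an insecure registry, then restarts the daemon. A minimal sketch for verifying the drop-in on the node (file path and value taken from the log):

	# confirm the minikube drop-in survived the crio restart
	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio
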
	I1216 05:05:03.415264  285878 start.go:293] postStartSetup for "embed-certs-127599" (driver="docker")
	I1216 05:05:03.415277  285878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:03.415379  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:03.415436  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.434727  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.526138  285878 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:03.529645  285878 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:03.529676  285878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:03.529688  285878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:03.529745  285878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:03.529814  285878 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:03.529894  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:03.537519  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:03.555384  285878 start.go:296] duration metric: took 140.103941ms for postStartSetup
	I1216 05:05:03.555454  285878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:03.555498  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.574541  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.671209  285878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:03.675704  285878 fix.go:56] duration metric: took 4.669173594s for fixHost
	I1216 05:05:03.675732  285878 start.go:83] releasing machines lock for "embed-certs-127599", held for 4.669223523s
	I1216 05:05:03.675807  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:05:03.693006  285878 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:03.693089  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.693130  285878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:03.693193  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.711276  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.715049  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.415146  275199 pod_ready.go:83] waiting for pod "kube-proxy-7pldc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.813274  275199 pod_ready.go:94] pod "kube-proxy-7pldc" is "Ready"
	I1216 05:05:03.813309  275199 pod_ready.go:86] duration metric: took 398.130225ms for pod "kube-proxy-7pldc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.016765  275199 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.412999  275199 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-545075" is "Ready"
	I1216 05:05:04.413055  275199 pod_ready.go:86] duration metric: took 396.261871ms for pod "kube-scheduler-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.413072  275199 pod_ready.go:40] duration metric: took 37.908213005s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:04.460171  275199 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1216 05:05:04.461837  275199 out.go:203] 
	W1216 05:05:04.462951  275199 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1216 05:05:04.464050  275199 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1216 05:05:04.465161  275199 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-545075" cluster and "default" namespace by default
	I1216 05:05:03.870367  285878 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:03.877245  285878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:03.911801  285878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:03.916421  285878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:03.916499  285878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:03.924729  285878 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:05:03.924754  285878 start.go:496] detecting cgroup driver to use...
	I1216 05:05:03.924789  285878 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:03.924826  285878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:03.938355  285878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:03.950687  285878 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:03.950755  285878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:03.966322  285878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:03.978654  285878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:04.067259  285878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:04.157828  285878 docker.go:234] disabling docker service ...
	I1216 05:05:04.157924  285878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:04.172457  285878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:04.185242  285878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:04.268722  285878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:04.352800  285878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:04.369946  285878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:04.389991  285878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:04.390111  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.402275  285878 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:04.402346  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.413804  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.423233  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.433098  285878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:04.441987  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.452342  285878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.461567  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
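
Note: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. Reconstructed from the commands (not read back from the node), the relevant lines of the drop-in would look roughly like:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
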
	I1216 05:05:04.470671  285878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:04.478121  285878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:04.485428  285878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:04.584484  285878 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:05:04.714916  285878 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:04.715042  285878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:04.719134  285878 start.go:564] Will wait 60s for crictl version
	I1216 05:05:04.719175  285878 ssh_runner.go:195] Run: which crictl
	I1216 05:05:04.722694  285878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:04.750409  285878 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
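
Note: the version probe above goes through the endpoint written to /etc/crictl.yaml a moment earlier. The same checks can be run by hand with standard crictl subcommands (socket path from the log):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info | head    # runtime readiness and config summary
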
	I1216 05:05:04.750485  285878 ssh_runner.go:195] Run: crio --version
	I1216 05:05:04.786143  285878 ssh_runner.go:195] Run: crio --version
	I1216 05:05:04.822457  285878 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:05:04.823609  285878 cli_runner.go:164] Run: docker network inspect embed-certs-127599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:04.842823  285878 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1216 05:05:04.847000  285878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
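
Note: the /etc/hosts update above uses a filter-then-copy idiom because a plain `sudo echo ... > /etc/hosts` would apply the redirection in the unprivileged shell. Spelled out (IP and hostname from the log):

	# drop any stale entry, append the fresh one, then install as root
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
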
	I1216 05:05:04.857344  285878 kubeadm.go:884] updating cluster {Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:05:04.857466  285878 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:04.857593  285878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:04.895769  285878 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:04.895807  285878 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:05:04.895868  285878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:04.930726  285878 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:04.930758  285878 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:05:04.930767  285878 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1216 05:05:04.930907  285878 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-127599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:05:04.931008  285878 ssh_runner.go:195] Run: crio config
	I1216 05:05:04.997735  285878 cni.go:84] Creating CNI manager for ""
	I1216 05:05:04.997762  285878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:04.997780  285878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:05:04.997819  285878 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-127599 NodeName:embed-certs-127599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:05:04.997960  285878 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-127599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:05:04.998048  285878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:05:05.008874  285878 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:05:05.008944  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:05:05.020305  285878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1216 05:05:05.037984  285878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:05:05.055161  285878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
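
Note: the kubeadm.yaml written above chains InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents. If you want to sanity-check such a file before handing it to the cluster, recent kubeadm releases (v1.26+; availability assumed for the version in use) can validate it statically:

	# static schema/semantics check of the multi-document kubeadm config
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
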
	I1216 05:05:05.074450  285878 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:05:05.080583  285878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:05.094342  285878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:05.222260  285878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:05.249350  285878 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599 for IP: 192.168.94.2
	I1216 05:05:05.249372  285878 certs.go:195] generating shared ca certs ...
	I1216 05:05:05.249392  285878 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:05.249576  285878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:05:05.249640  285878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:05:05.249652  285878 certs.go:257] generating profile certs ...
	I1216 05:05:05.249786  285878 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/client.key
	I1216 05:05:05.249879  285878 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/apiserver.key.8323bd4e
	I1216 05:05:05.249938  285878 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/proxy-client.key
	I1216 05:05:05.250125  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:05:05.250179  285878 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:05:05.250196  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:05:05.250239  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:05:05.250282  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:05:05.250325  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:05:05.250394  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:05.251932  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:05:05.279924  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:05:05.308114  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:05:05.340146  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:05:05.375339  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 05:05:05.400675  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:05:05.426336  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:05:05.456123  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:05:05.485328  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:05:05.514764  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:05:05.538413  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:05:05.562818  285878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:05:05.581530  285878 ssh_runner.go:195] Run: openssl version
	I1216 05:05:05.590154  285878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:05.601115  285878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:05:05.613737  285878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:05.619334  285878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:05.619387  285878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:05.675958  285878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:05:05.685906  285878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:05:05.696414  285878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:05:05.708508  285878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:05:05.713636  285878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:05:05.713711  285878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:05:05.769068  285878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:05:05.779303  285878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:05:05.789116  285878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:05:05.799879  285878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:05:05.805200  285878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:05:05.805252  285878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:05:05.865108  285878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
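
Note: the openssl/ln pairs above implement the standard OpenSSL CA-directory layout: each trusted PEM is linked under its subject-hash name so TLS libraries can find it by hash. The symlink names checked in the log (b5213941.0, 51391683.0, 3ec20f2e.0) come from the hash command, e.g.:

	# the subject hash determines the /etc/ssl/certs/<hash>.0 symlink name
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/$h.0
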
	I1216 05:05:05.876011  285878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:05:05.880721  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:05:05.952652  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:05:06.022258  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:05:06.085243  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:05:06.147255  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:05:06.204474  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
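
Note: each `-checkend 86400` probe above exits non-zero if the certificate would expire within the next 86400 seconds (24h); this is how minikube decides whether control-plane certs need regeneration before reuse. Standalone form of the same check:

	# exit 0 if the cert is still valid 24h from now
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "cert ok"
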
	I1216 05:05:06.268071  285878 kubeadm.go:401] StartCluster: {Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:06.268176  285878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:05:06.268232  285878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:05:06.302937  285878 cri.go:89] found id: "5aba47586bca5359899a12179f858df845d0a3d7ff35738262380c0dc7df2808"
	I1216 05:05:06.302965  285878 cri.go:89] found id: "9470750a66d86139939d4b618522b12a7fec97282112773930ad51d8f35b0afe"
	I1216 05:05:06.302971  285878 cri.go:89] found id: "aaeef0aba55ec8b43be948ea78142e157863fc7464359991825255d4956d31c7"
	I1216 05:05:06.302977  285878 cri.go:89] found id: "40939f377b2ea50bcacad648ad4f8c9351d77ec9c17f14735f75e1fce438fe46"
	I1216 05:05:06.302982  285878 cri.go:89] found id: ""
	I1216 05:05:06.303043  285878 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:05:06.317237  285878 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:06Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:06.317304  285878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:05:06.326777  285878 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:05:06.326798  285878 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:05:06.326835  285878 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:05:06.336136  285878 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:05:06.336879  285878 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-127599" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:06.337334  285878 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-127599" cluster setting kubeconfig missing "embed-certs-127599" context setting]
	I1216 05:05:06.337902  285878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:06.339927  285878 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:05:06.350230  285878 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1216 05:05:06.350271  285878 kubeadm.go:602] duration metric: took 23.46779ms to restartPrimaryControlPlane
	I1216 05:05:06.350281  285878 kubeadm.go:403] duration metric: took 82.219997ms to StartCluster
	I1216 05:05:06.350299  285878 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:06.350363  285878 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:06.352595  285878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:06.352830  285878 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:06.353025  285878 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:05:06.353201  285878 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-127599"
	I1216 05:05:06.353225  285878 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-127599"
	W1216 05:05:06.353234  285878 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:05:06.353242  285878 addons.go:70] Setting dashboard=true in profile "embed-certs-127599"
	I1216 05:05:06.353273  285878 addons.go:239] Setting addon dashboard=true in "embed-certs-127599"
	I1216 05:05:06.353282  285878 addons.go:70] Setting default-storageclass=true in profile "embed-certs-127599"
	W1216 05:05:06.353287  285878 addons.go:248] addon dashboard should already be in state true
	I1216 05:05:06.353299  285878 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-127599"
	I1216 05:05:06.353328  285878 host.go:66] Checking if "embed-certs-127599" exists ...
	I1216 05:05:06.353349  285878 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:06.353273  285878 host.go:66] Checking if "embed-certs-127599" exists ...
	I1216 05:05:06.353716  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:06.353793  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:06.353983  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:06.358089  285878 out.go:179] * Verifying Kubernetes components...
	I1216 05:05:06.361102  285878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:06.382904  285878 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:05:06.384054  285878 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:05:06.384079  285878 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:06.384096  285878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:05:06.384165  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:06.385061  285878 addons.go:239] Setting addon default-storageclass=true in "embed-certs-127599"
	W1216 05:05:06.385092  285878 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:05:06.385123  285878 host.go:66] Checking if "embed-certs-127599" exists ...
	I1216 05:05:06.385865  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:06.386297  285878 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1216 05:05:04.107167  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:06.109608  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:06.387112  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:05:06.387166  285878 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:05:06.387263  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:06.415104  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:06.418043  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:06.434363  285878 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:06.434389  285878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:05:06.434448  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:06.467298  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:06.567848  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:05:06.567877  285878 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:05:06.568462  285878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:06.585004  285878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:06.588783  285878 node_ready.go:35] waiting up to 6m0s for node "embed-certs-127599" to be "Ready" ...
	I1216 05:05:06.596291  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:05:06.596320  285878 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:05:06.619562  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:05:06.619588  285878 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:05:06.619974  285878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:06.646359  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:05:06.646387  285878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:05:06.673497  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:05:06.673530  285878 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:05:06.698759  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:05:06.698796  285878 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:05:06.722359  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:05:06.722383  285878 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:05:06.739863  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:05:06.739889  285878 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:05:06.760730  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:06.760840  285878 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:05:06.780963  285878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:07.960700  285878 node_ready.go:49] node "embed-certs-127599" is "Ready"
	I1216 05:05:07.960749  285878 node_ready.go:38] duration metric: took 1.371929833s for node "embed-certs-127599" to be "Ready" ...
	I1216 05:05:07.960766  285878 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:07.960923  285878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:08.754763  285878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.169687979s)
	I1216 05:05:08.754838  285878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.134840002s)
	I1216 05:05:08.755278  285878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.974260218s)
	I1216 05:05:08.755608  285878 api_server.go:72] duration metric: took 2.402740073s to wait for apiserver process to appear ...
	I1216 05:05:08.755623  285878 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:08.755642  285878 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 05:05:08.757738  285878 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-127599 addons enable metrics-server
	
	I1216 05:05:08.763784  285878 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:08.763817  285878 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
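
Note: the 500 above is the expected transient state just after an apiserver restart: two poststarthooks (rbac/bootstrap-roles and the system priority classes) have not finished, so the aggregate healthz fails even though etcd and every other hook report ok; the retry at 05:05:09 below returns 200. The same per-check breakdown can be polled by hand (self-signed cert, hence -k; address from the log):

	# verbose healthz shows which poststarthooks are still pending
	curl -sk 'https://192.168.94.2:8443/healthz?verbose'
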
	I1216 05:05:08.775569  285878 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1216 05:05:08.776663  285878 addons.go:530] duration metric: took 2.423654258s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1216 05:05:08.607566  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:11.105819  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:09.256465  285878 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 05:05:09.261737  285878 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 05:05:09.262830  285878 api_server.go:141] control plane version: v1.34.2
	I1216 05:05:09.262856  285878 api_server.go:131] duration metric: took 507.225223ms to wait for apiserver health ...
	I1216 05:05:09.262867  285878 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:09.266811  285878 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:09.266863  285878 system_pods.go:61] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:09.266888  285878 system_pods.go:61] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:09.266899  285878 system_pods.go:61] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:09.266913  285878 system_pods.go:61] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:09.266923  285878 system_pods.go:61] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:09.266935  285878 system_pods.go:61] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:09.266947  285878 system_pods.go:61] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:09.266959  285878 system_pods.go:61] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:09.266973  285878 system_pods.go:74] duration metric: took 4.098506ms to wait for pod list to return data ...
	I1216 05:05:09.266989  285878 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:09.269276  285878 default_sa.go:45] found service account: "default"
	I1216 05:05:09.269293  285878 default_sa.go:55] duration metric: took 2.297068ms for default service account to be created ...
	I1216 05:05:09.269303  285878 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:05:09.271995  285878 system_pods.go:86] 8 kube-system pods found
	I1216 05:05:09.272057  285878 system_pods.go:89] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:09.272077  285878 system_pods.go:89] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:09.272106  285878 system_pods.go:89] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:09.272132  285878 system_pods.go:89] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:09.272148  285878 system_pods.go:89] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:09.272159  285878 system_pods.go:89] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:09.272169  285878 system_pods.go:89] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:09.272179  285878 system_pods.go:89] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:09.272189  285878 system_pods.go:126] duration metric: took 2.878527ms to wait for k8s-apps to be running ...
	I1216 05:05:09.272200  285878 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:05:09.272253  285878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:09.289320  285878 system_svc.go:56] duration metric: took 17.114578ms WaitForService to wait for kubelet
	I1216 05:05:09.289346  285878 kubeadm.go:587] duration metric: took 2.93648066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:09.289361  285878 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:09.293536  285878 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:09.293575  285878 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:09.293588  285878 node_conditions.go:105] duration metric: took 4.222572ms to run NodePressure ...
	I1216 05:05:09.293599  285878 start.go:242] waiting for startup goroutines ...
	I1216 05:05:09.293606  285878 start.go:247] waiting for cluster config update ...
	I1216 05:05:09.293617  285878 start.go:256] writing updated cluster config ...
	I1216 05:05:09.293930  285878 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:09.298463  285878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:09.302479  285878 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:05:11.307286  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:13.308848  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:13.106844  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:15.106911  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:17.107424  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 16 05:04:45 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:45.103340055Z" level=info msg="Started container" PID=1756 containerID=1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper id=4953d307-798f-475c-8957-42dd2e9c400c name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bb54ffac6141a4e9aab3b7127ff5e816f4aa3882dbd089bda90c8e44917036b
	Dec 16 05:04:46 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:46.069247685Z" level=info msg="Removing container: d79585d36d5a9508dc022088c8249763913f6f7e7d0455f1f8c134bfcd73968a" id=a0b995ba-0ecf-47cc-8256-3abd49a54944 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:04:46 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:46.078766739Z" level=info msg="Removed container d79585d36d5a9508dc022088c8249763913f6f7e7d0455f1f8c134bfcd73968a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper" id=a0b995ba-0ecf-47cc-8256-3abd49a54944 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.093985821Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=245e57c6-a598-42bc-8f12-9d9b79a7473c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.094751503Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=72a81a74-0cd3-4fab-a291-f1f1da7ec83d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.095618426Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dda621dc-3edc-47d5-9556-efb618f2c0a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.095741974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.100072394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.100260381Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5a4cf9849c45739ac5962ab228597069565903c1e85416d0824f8bc75fa0c468/merged/etc/passwd: no such file or directory"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.100293309Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5a4cf9849c45739ac5962ab228597069565903c1e85416d0824f8bc75fa0c468/merged/etc/group: no such file or directory"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.100572313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.131153781Z" level=info msg="Created container d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4: kube-system/storage-provisioner/storage-provisioner" id=dda621dc-3edc-47d5-9556-efb618f2c0a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.131831479Z" level=info msg="Starting container: d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4" id=4838c256-22fd-4504-84cd-b19ffe8b7cc2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.133835911Z" level=info msg="Started container" PID=1771 containerID=d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4 description=kube-system/storage-provisioner/storage-provisioner id=4838c256-22fd-4504-84cd-b19ffe8b7cc2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cab3e68bf0fb2a70735abde6d4ecd368c85d337f9a4addb30709fa988b5c8b3
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.98050927Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=502413d3-586d-4b49-8666-39c504cabd98 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.981496469Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8af85ca2-b954-4c60-8804-c446fd1079cb name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.98258164Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper" id=a6fe91d4-be4b-489b-9595-4f0de70555e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.982709727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.988966811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.989646108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.020922925Z" level=info msg="Created container 5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper" id=a6fe91d4-be4b-489b-9595-4f0de70555e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.021611188Z" level=info msg="Starting container: 5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7" id=6847f6ad-33d7-4dce-a5ab-663c7af54ee4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.023560499Z" level=info msg="Started container" PID=1787 containerID=5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper id=6847f6ad-33d7-4dce-a5ab-663c7af54ee4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bb54ffac6141a4e9aab3b7127ff5e816f4aa3882dbd089bda90c8e44917036b
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.109222132Z" level=info msg="Removing container: 1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32" id=1caec7cc-620b-41bc-8542-b75c188bb1e2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.118619161Z" level=info msg="Removed container 1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper" id=1caec7cc-620b-41bc-8542-b75c188bb1e2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	5defdfa105d9d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   5bb54ffac6141       dashboard-metrics-scraper-5f989dc9cf-n2t2l       kubernetes-dashboard
	d3f4510bb12b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   1cab3e68bf0fb       storage-provisioner                              kube-system
	1f44c8595e54b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   30afaaff325cc       kubernetes-dashboard-8694d4445c-pv585            kubernetes-dashboard
	0ab9aa1f7ad82       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   0f03f0b8fe809       busybox                                          default
	76ae5f75a2c9c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   e6cc7098dd604       coredns-5dd5756b68-fplzg                         kube-system
	a6c3f822d7671       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   8c6e3f828cfab       kindnet-r74zw                                    kube-system
	ac47c1b8c4f01       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   0785751741b1b       kube-proxy-7pldc                                 kube-system
	926327055a53d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   1cab3e68bf0fb       storage-provisioner                              kube-system
	01c4e82517b36       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   5eec7d4fd18fa       kube-controller-manager-old-k8s-version-545075   kube-system
	5aeeae98995f1       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   1d1533d086273       kube-apiserver-old-k8s-version-545075            kube-system
	31f7764ec0af5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   9ce25643ad352       etcd-old-k8s-version-545075                      kube-system
	af4a752092a6d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   dbc5090e92c7f       kube-scheduler-old-k8s-version-545075            kube-system
	
	
	==> coredns [76ae5f75a2c9c80d756790c824b41bf144abd8148eaeb535a627919c1efa7675] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58788 - 3741 "HINFO IN 4922608715032854283.1290291217694517511. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014974135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-545075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-545075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=old-k8s-version-545075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_03_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:03:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-545075
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:05:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:04:55 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:04:55 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:04:55 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:04:55 +0000   Tue, 16 Dec 2025 05:03:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-545075
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                37fbaf4b-0aa1-43d5-a5d3-708d61d6a5be
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-fplzg                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-545075                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-r74zw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-545075             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-545075    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-7pldc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-545075             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-n2t2l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pv585             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                 kubelet          Node old-k8s-version-545075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                 kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                 kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                 node-controller  Node old-k8s-version-545075 event: Registered Node old-k8s-version-545075 in Controller
	  Normal  NodeReady                98s                  kubelet          Node old-k8s-version-545075 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 58s)    kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 58s)    kubelet          Node old-k8s-version-545075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 58s)    kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                  node-controller  Node old-k8s-version-545075 event: Registered Node old-k8s-version-545075 in Controller
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [31f7764ec0af58c2787749a48d903be38a43b0773a2a9b369902c7942218dc65] <==
	{"level":"info","ts":"2025-12-16T05:04:22.552161Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-16T05:04:22.552172Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-16T05:04:22.552523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-16T05:04:22.552616Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-16T05:04:22.552824Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T05:04:22.552879Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T05:04:22.555858Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-16T05:04:22.556132Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-16T05:04:22.556166Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-16T05:04:22.556285Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-16T05:04:22.556295Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-16T05:04:23.844472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-16T05:04:23.844514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-16T05:04:23.844563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-16T05:04:23.84458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-16T05:04:23.84459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-16T05:04:23.844598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-16T05:04:23.844605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-16T05:04:23.846354Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-545075 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-16T05:04:23.846361Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T05:04:23.84639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T05:04:23.846602Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-16T05:04:23.846635Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-16T05:04:23.847591Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-16T05:04:23.847604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 05:05:19 up 47 min,  0 user,  load average: 3.87, 2.79, 1.94
	Linux old-k8s-version-545075 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6c3f822d7671ca410f51914f238e079a6de421c2cd37d8725a605390702b0ff] <==
	I1216 05:04:25.475497       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:04:25.476156       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 05:04:25.476555       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:04:25.476578       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:04:25.476600       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:04:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:04:25.775169       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:04:25.775194       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:04:25.870295       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:04:25.871356       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:04:26.271609       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:04:26.271642       1 metrics.go:72] Registering metrics
	I1216 05:04:26.271717       1 controller.go:711] "Syncing nftables rules"
	I1216 05:04:35.776059       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:04:35.776117       1 main.go:301] handling current node
	I1216 05:04:45.775714       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:04:45.775774       1 main.go:301] handling current node
	I1216 05:04:55.776001       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:04:55.776065       1 main.go:301] handling current node
	I1216 05:05:05.776108       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:05:05.776217       1 main.go:301] handling current node
	I1216 05:05:15.781151       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:05:15.781183       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5aeeae98995f110f455b137eea822f077ed8722bcfac6f2d52cb7baa211a4f13] <==
	I1216 05:04:24.854617       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1216 05:04:24.924259       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:04:24.954199       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1216 05:04:24.954230       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 05:04:24.954444       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1216 05:04:24.954253       1 shared_informer.go:318] Caches are synced for configmaps
	I1216 05:04:24.954251       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1216 05:04:24.954539       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1216 05:04:24.954309       1 aggregator.go:166] initial CRD sync complete...
	I1216 05:04:24.954653       1 autoregister_controller.go:141] Starting autoregister controller
	I1216 05:04:24.954659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:04:24.954667       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:04:24.954341       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:04:25.020456       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1216 05:04:25.846740       1 controller.go:624] quota admission added evaluator for: namespaces
	I1216 05:04:25.856277       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:04:25.879396       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1216 05:04:25.896128       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:04:25.904064       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:04:25.911165       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1216 05:04:25.946813       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.150.129"}
	I1216 05:04:25.962002       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.48.32"}
	I1216 05:04:38.146271       1 controller.go:624] quota admission added evaluator for: endpoints
	I1216 05:04:38.347624       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:04:38.446689       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [01c4e82517b36d1f66b8f2d1adb4883fc3cc6a22d531ab5783d092744f69a8d2] <==
	I1216 05:04:38.404050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.048µs"
	I1216 05:04:38.450438       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1216 05:04:38.450770       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1216 05:04:38.458295       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-n2t2l"
	I1216 05:04:38.458328       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-pv585"
	I1216 05:04:38.458829       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 05:04:38.464685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="14.707927ms"
	I1216 05:04:38.467782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.606352ms"
	I1216 05:04:38.473202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="8.471905ms"
	I1216 05:04:38.473286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.153µs"
	I1216 05:04:38.477003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.857µs"
	I1216 05:04:38.477414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.569893ms"
	I1216 05:04:38.477486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.419µs"
	I1216 05:04:38.485532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.396µs"
	I1216 05:04:38.516230       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 05:04:38.516265       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1216 05:04:42.081646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.294011ms"
	I1216 05:04:42.084625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.928µs"
	I1216 05:04:45.073883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.23µs"
	I1216 05:04:46.078186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.056µs"
	I1216 05:04:47.081152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.173µs"
	I1216 05:05:00.119549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.79µs"
	I1216 05:05:02.830758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.349493ms"
	I1216 05:05:02.830855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.788µs"
	I1216 05:05:08.781046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.897µs"
	
	
	==> kube-proxy [ac47c1b8c4f01cb1fbcb3874ad2386605c553ce37f62c41a839967cc9e9f24d9] <==
	I1216 05:04:25.365812       1 server_others.go:69] "Using iptables proxy"
	I1216 05:04:25.379770       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1216 05:04:25.401079       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:04:25.405947       1 server_others.go:152] "Using iptables Proxier"
	I1216 05:04:25.406065       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1216 05:04:25.406105       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1216 05:04:25.406180       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1216 05:04:25.406698       1 server.go:846] "Version info" version="v1.28.0"
	I1216 05:04:25.406762       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:04:25.408702       1 config.go:97] "Starting endpoint slice config controller"
	I1216 05:04:25.408747       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1216 05:04:25.411209       1 config.go:315] "Starting node config controller"
	I1216 05:04:25.411225       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1216 05:04:25.409336       1 config.go:188] "Starting service config controller"
	I1216 05:04:25.411780       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1216 05:04:25.509707       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1216 05:04:25.512879       1 shared_informer.go:318] Caches are synced for service config
	I1216 05:04:25.513468       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [af4a752092a6dfab68baa25810fa6bf8af6321385deaa1ca39c71f745e63993c] <==
	I1216 05:04:23.087249       1 serving.go:348] Generated self-signed cert in-memory
	W1216 05:04:24.879876       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:04:24.882081       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:04:24.882124       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:04:24.882135       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:04:24.908369       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1216 05:04:24.908399       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:04:24.909990       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:04:24.910030       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 05:04:24.912035       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1216 05:04:24.912093       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1216 05:04:24.914459       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E1216 05:04:24.914503       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1216 05:04:24.917525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 05:04:24.917919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1216 05:04:24.917667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 05:04:24.918350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1216 05:04:24.917794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 05:04:24.918492       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1216 05:04:24.917863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 05:04:24.918568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1216 05:04:24.922219       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 05:04:24.922558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1216 05:04:26.110679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 05:04:38 old-k8s-version-545075 kubelet[723]: I1216 05:04:38.589636     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3222436e-fc8d-4cce-8235-3c3b4766d3db-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pv585\" (UID: \"3222436e-fc8d-4cce-8235-3c3b4766d3db\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv585"
	Dec 16 05:04:38 old-k8s-version-545075 kubelet[723]: I1216 05:04:38.589697     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs86t\" (UniqueName: \"kubernetes.io/projected/3222436e-fc8d-4cce-8235-3c3b4766d3db-kube-api-access-qs86t\") pod \"kubernetes-dashboard-8694d4445c-pv585\" (UID: \"3222436e-fc8d-4cce-8235-3c3b4766d3db\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv585"
	Dec 16 05:04:38 old-k8s-version-545075 kubelet[723]: I1216 05:04:38.589731     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f599327d-7596-48d8-ad09-635c4236b746-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-n2t2l\" (UID: \"f599327d-7596-48d8-ad09-635c4236b746\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l"
	Dec 16 05:04:38 old-k8s-version-545075 kubelet[723]: I1216 05:04:38.589833     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x55z6\" (UniqueName: \"kubernetes.io/projected/f599327d-7596-48d8-ad09-635c4236b746-kube-api-access-x55z6\") pod \"dashboard-metrics-scraper-5f989dc9cf-n2t2l\" (UID: \"f599327d-7596-48d8-ad09-635c4236b746\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l"
	Dec 16 05:04:42 old-k8s-version-545075 kubelet[723]: I1216 05:04:42.073782     723 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv585" podStartSLOduration=1.082209908 podCreationTimestamp="2025-12-16 05:04:38 +0000 UTC" firstStartedPulling="2025-12-16 05:04:38.798925593 +0000 UTC m=+16.984536809" lastFinishedPulling="2025-12-16 05:04:41.790420193 +0000 UTC m=+19.976031416" observedRunningTime="2025-12-16 05:04:42.0733246 +0000 UTC m=+20.258935833" watchObservedRunningTime="2025-12-16 05:04:42.073704515 +0000 UTC m=+20.259315749"
	Dec 16 05:04:45 old-k8s-version-545075 kubelet[723]: I1216 05:04:45.063584     723 scope.go:117] "RemoveContainer" containerID="d79585d36d5a9508dc022088c8249763913f6f7e7d0455f1f8c134bfcd73968a"
	Dec 16 05:04:46 old-k8s-version-545075 kubelet[723]: I1216 05:04:46.067902     723 scope.go:117] "RemoveContainer" containerID="d79585d36d5a9508dc022088c8249763913f6f7e7d0455f1f8c134bfcd73968a"
	Dec 16 05:04:46 old-k8s-version-545075 kubelet[723]: I1216 05:04:46.068150     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:04:46 old-k8s-version-545075 kubelet[723]: E1216 05:04:46.068546     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:04:47 old-k8s-version-545075 kubelet[723]: I1216 05:04:47.071272     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:04:47 old-k8s-version-545075 kubelet[723]: E1216 05:04:47.071508     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:04:48 old-k8s-version-545075 kubelet[723]: I1216 05:04:48.766884     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:04:48 old-k8s-version-545075 kubelet[723]: E1216 05:04:48.767200     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:04:56 old-k8s-version-545075 kubelet[723]: I1216 05:04:56.093638     723 scope.go:117] "RemoveContainer" containerID="926327055a53dcd7c24cb4836a69ddf126c05855312a52841c42a80393b3226a"
	Dec 16 05:04:59 old-k8s-version-545075 kubelet[723]: I1216 05:04:59.979806     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:05:00 old-k8s-version-545075 kubelet[723]: I1216 05:05:00.108005     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:05:00 old-k8s-version-545075 kubelet[723]: I1216 05:05:00.108233     723 scope.go:117] "RemoveContainer" containerID="5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7"
	Dec 16 05:05:00 old-k8s-version-545075 kubelet[723]: E1216 05:05:00.108589     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:05:08 old-k8s-version-545075 kubelet[723]: I1216 05:05:08.767293     723 scope.go:117] "RemoveContainer" containerID="5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7"
	Dec 16 05:05:08 old-k8s-version-545075 kubelet[723]: E1216 05:05:08.769078     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:05:16 old-k8s-version-545075 kubelet[723]: I1216 05:05:16.752520     723 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 05:05:16 old-k8s-version-545075 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:05:16 old-k8s-version-545075 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:05:16 old-k8s-version-545075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:05:16 old-k8s-version-545075 systemd[1]: kubelet.service: Consumed 1.581s CPU time.
	
	
	==> kubernetes-dashboard [1f44c8595e54b12a159c20e93ca46146d14d3cb2b264a6dddf302b7d3be69f68] <==
	2025/12/16 05:04:41 Starting overwatch
	2025/12/16 05:04:41 Using namespace: kubernetes-dashboard
	2025/12/16 05:04:41 Using in-cluster config to connect to apiserver
	2025/12/16 05:04:41 Using secret token for csrf signing
	2025/12/16 05:04:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 05:04:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 05:04:41 Successful initial request to the apiserver, version: v1.28.0
	2025/12/16 05:04:41 Generating JWE encryption key
	2025/12/16 05:04:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 05:04:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 05:04:42 Initializing JWE encryption key from synchronized object
	2025/12/16 05:04:42 Creating in-cluster Sidecar client
	2025/12/16 05:04:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:04:42 Serving insecurely on HTTP port: 9090
	2025/12/16 05:05:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [926327055a53dcd7c24cb4836a69ddf126c05855312a52841c42a80393b3226a] <==
	I1216 05:04:25.312199       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 05:04:55.315468       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4] <==
	I1216 05:04:56.146405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:04:56.154381       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:04:56.154414       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 05:05:13.552405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:05:13.552559       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-545075_7e8e2828-29d0-45ba-8fe3-878e82b9743b!
	I1216 05:05:13.552569       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c47abbd-d74f-4d30-b214-42c151476c8d", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-545075_7e8e2828-29d0-45ba-8fe3-878e82b9743b became leader
	I1216 05:05:13.652832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-545075_7e8e2828-29d0-45ba-8fe3-878e82b9743b!
	

-- /stdout --
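The first storage-provisioner container above died because its client never reached the apiserver Service VIP (10.96.0.1:443) within the 32s client timeout, while its replacement connected and then won the kube-system/k8s.io-minikube-hostpath leader-election lease. A quick in-cluster reproduction of that same probe, as a sketch (the pod name api-probe and the curlimages/curl image are arbitrary choices; /version is anonymously readable under default RBAC):

	kubectl --context old-k8s-version-545075 run api-probe --rm -it --restart=Never \
	  --image=curlimages/curl -- curl -ksS https://10.96.0.1:443/version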
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-545075 -n old-k8s-version-545075
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-545075 -n old-k8s-version-545075: exit status 2 (315.074971ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-545075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
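The field selector above lists every pod whose phase is not Running across all namespaces, but prints bare names only; a sketch of the same query that pairs each pod with its namespace and phase reads more easily:

	kubectl --context old-k8s-version-545075 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} {.status.phase}{"\n"}{end}'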
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
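All three proxy variables are empty, so no proxy was involved in this run; the equivalent manual check on the host is:

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo 'no proxy variables set'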
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-545075
helpers_test.go:244: (dbg) docker inspect old-k8s-version-545075:

-- stdout --
	[
	    {
	        "Id": "670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa",
	        "Created": "2025-12-16T05:02:59.987474092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 275525,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:04:13.654824769Z",
	            "FinishedAt": "2025-12-16T05:04:12.679746004Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/hosts",
	        "LogPath": "/var/lib/docker/containers/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa/670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa-json.log",
	        "Name": "/old-k8s-version-545075",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-545075:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-545075",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "670e802935b6d6a653c8ce7cd91dc4ce48dc6ba0d5c8351f64f60fa44b6674fa",
	                "LowerDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47a4b68bc2b895c2ef8b0b7c3c3ec57c812d958c12d5945e68b5604e0c2bb37a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-545075",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-545075/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-545075",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-545075",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-545075",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bd7017d38931ac59cb777bbac12b61e6f1b96f66635beb43f79dfd01f4c7a219",
	            "SandboxKey": "/var/run/docker/netns/bd7017d38931",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-545075": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "333da06948413c05c3047e735767d3346609d7c3a2cd33e3c019b11d1e8453a4",
	                    "EndpointID": "e2624061267c4b533793c173c69e1bcffa469e9a85818b6f8427f35b5e1c7d16",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f6:ac:ac:ad:33:c1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-545075",
	                        "670e802935b6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
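The docker inspect dump above is verbose; Go templates extract just the fields this post-mortem cares about, for example (same container name, template paths as in the JSON above):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' old-k8s-version-545075
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-545075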
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-545075 -n old-k8s-version-545075
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-545075 -n old-k8s-version-545075: exit status 2 (305.960462ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
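Exit status 2 while {{.Host}} reports Running means some component other than the host is unhealthy; querying all four status fields in one template (a sketch against the same profile) narrows down which:

	out/minikube-linux-amd64 status -p old-k8s-version-545075 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'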
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-545075 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-545075 logs -n 25: (1.065475179s)
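The post-mortem keeps only the last 25 lines per component; when that window is too narrow, the line count can be raised and the output redirected to a file (a sketch; the path is arbitrary):

	out/minikube-linux-amd64 -p old-k8s-version-545075 logs -n 100 --file=/tmp/old-k8s-version-545075.log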
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p cert-expiration-123545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ delete  │ -p cert-expiration-123545                                                                                                                                                                                                                     │ cert-expiration-123545       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:03 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-545075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │                     │
	│ stop    │ -p old-k8s-version-545075 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:03 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-545075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p kubernetes-upgrade-132399                                                                                                                                                                                                                  │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ delete  │ -p disable-driver-mounts-148966                                                                                                                                                                                                               │ disable-driver-mounts-148966 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p no-preload-990631 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p embed-certs-127599 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                               │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:04:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:04:58.807546  285878 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:04:58.807646  285878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:58.807658  285878 out.go:374] Setting ErrFile to fd 2...
	I1216 05:04:58.807663  285878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:04:58.807870  285878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:04:58.808307  285878 out.go:368] Setting JSON to false
	I1216 05:04:58.809533  285878 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2850,"bootTime":1765858649,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:04:58.809586  285878 start.go:143] virtualization: kvm guest
	I1216 05:04:58.811715  285878 out.go:179] * [embed-certs-127599] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:04:58.813202  285878 notify.go:221] Checking for updates...
	I1216 05:04:58.813273  285878 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:04:58.814671  285878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:04:58.816055  285878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:58.817240  285878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:04:58.818360  285878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:04:58.819590  285878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:04:58.821480  285878 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:04:58.822299  285878 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:04:58.848069  285878 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:04:58.848202  285878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:58.906263  285878 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:04:58.896349391 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:58.906399  285878 docker.go:319] overlay module found
	I1216 05:04:58.908347  285878 out.go:179] * Using the docker driver based on existing profile
	I1216 05:04:58.909504  285878 start.go:309] selected driver: docker
	I1216 05:04:58.909519  285878 start.go:927] validating driver "docker" against &{Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:58.909604  285878 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:04:58.910114  285878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:04:58.971198  285878 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:04:58.959512916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:04:58.971557  285878 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:04:58.971608  285878 cni.go:84] Creating CNI manager for ""
	I1216 05:04:58.971676  285878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:58.971782  285878 start.go:353] cluster config:
	{Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:58.974821  285878 out.go:179] * Starting "embed-certs-127599" primary control-plane node in "embed-certs-127599" cluster
	I1216 05:04:58.976420  285878 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:04:58.978154  285878 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:04:58.980043  285878 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:04:58.980116  285878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:04:58.980120  285878 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:04:58.980154  285878 cache.go:65] Caching tarball of preloaded images
	I1216 05:04:58.980280  285878 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:04:58.980291  285878 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:04:58.980419  285878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/config.json ...
	I1216 05:04:59.006359  285878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:04:59.006379  285878 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:04:59.006397  285878 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:04:59.006434  285878 start.go:360] acquireMachinesLock for embed-certs-127599: {Name:mk782536a8f26bd7063e28ead9562fcba9d13ffe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:04:59.006494  285878 start.go:364] duration metric: took 39.093µs to acquireMachinesLock for "embed-certs-127599"
	I1216 05:04:59.006518  285878 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:04:59.006527  285878 fix.go:54] fixHost starting: 
	I1216 05:04:59.006819  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:04:59.030521  285878 fix.go:112] recreateIfNeeded on embed-certs-127599: state=Stopped err=<nil>
	W1216 05:04:59.030575  285878 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:04:58.146615  284202 cli_runner.go:164] Run: docker network inspect no-preload-990631 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:04:58.164702  284202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 05:04:58.168903  284202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:58.179164  284202 kubeadm.go:884] updating cluster {Name:no-preload-990631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:04:58.179295  284202 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:04:58.179337  284202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:04:58.211615  284202 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:04:58.211639  284202 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:04:58.211647  284202 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 05:04:58.211734  284202 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-990631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:04:58.211797  284202 ssh_runner.go:195] Run: crio config
	I1216 05:04:58.256766  284202 cni.go:84] Creating CNI manager for ""
	I1216 05:04:58.256789  284202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:04:58.256801  284202 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:04:58.256822  284202 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-990631 NodeName:no-preload-990631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:04:58.256971  284202 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-990631"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:04:58.257052  284202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:04:58.265699  284202 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:04:58.265763  284202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:04:58.273700  284202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1216 05:04:58.286894  284202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:04:58.300035  284202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1216 05:04:58.313614  284202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:04:58.317210  284202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:04:58.327214  284202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:58.414343  284202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:58.443043  284202 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631 for IP: 192.168.103.2
	I1216 05:04:58.443061  284202 certs.go:195] generating shared ca certs ...
	I1216 05:04:58.443081  284202 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:58.443293  284202 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:04:58.443372  284202 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:04:58.443391  284202 certs.go:257] generating profile certs ...
	I1216 05:04:58.443497  284202 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/client.key
	I1216 05:04:58.443575  284202 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key.44083b8e
	I1216 05:04:58.443635  284202 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key
	I1216 05:04:58.443842  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:04:58.443905  284202 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:04:58.443919  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:04:58.443943  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:04:58.443970  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:04:58.443999  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:04:58.444082  284202 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:04:58.444854  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:04:58.475449  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:04:58.495391  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:04:58.517175  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:04:58.549655  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:04:58.572549  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:04:58.596524  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:04:58.618088  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/no-preload-990631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:04:58.640733  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:04:58.663625  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:04:58.682127  284202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:04:58.700365  284202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:04:58.715543  284202 ssh_runner.go:195] Run: openssl version
	I1216 05:04:58.722348  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.729959  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:04:58.738355  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.742405  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.742457  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:04:58.787292  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:04:58.796645  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.804122  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:04:58.812151  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.816028  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.816079  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:04:58.855153  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:04:58.863425  284202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.873487  284202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:04:58.882811  284202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.887176  284202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.887238  284202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:04:58.926865  284202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:04:58.937699  284202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:04:58.942829  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:04:58.990450  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:04:59.040055  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:04:59.099401  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:04:59.157276  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:04:59.215120  284202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
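	The five `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. A pure-Go equivalent of that check (a sketch only; the path below is one of the files from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// matching the semantics of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err) // true would mean "renew me"
	}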
	I1216 05:04:59.258742  284202 kubeadm.go:401] StartCluster: {Name:no-preload-990631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-990631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:04:59.258841  284202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:04:59.258895  284202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:04:59.292160  284202 cri.go:89] found id: "4b4c73e54f123802f62a7af7b9af9fe41679d8d51fd01e105ca758edf669a4a5"
	I1216 05:04:59.292185  284202 cri.go:89] found id: "c7bb8494fe95f58ef0c8c0f1c32dd0af7c7cc7b113eb924e8c2854b0b292a637"
	I1216 05:04:59.292192  284202 cri.go:89] found id: "3332f5674ea8590f97c6776b7b666398f75a26d82a6f36588091b67934df1dd5"
	I1216 05:04:59.292196  284202 cri.go:89] found id: "f9cec7fd839f6e0c5273a0f3e839f84aa3e28f3ba21d352e705a6f3c17d87bd9"
	I1216 05:04:59.292201  284202 cri.go:89] found id: ""
	I1216 05:04:59.292247  284202 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:04:59.307274  284202 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:04:59Z" level=error msg="open /run/runc: no such file or directory"
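	Before deciding between a restart and a fresh init, minikube enumerates kube-system containers through crictl and then tries `runc list` to spot paused ones. The runc failure above is benign here: cri-o keeps its runtime state elsewhere, so /run/runc never exists and kubeadm.go merely logs the unpause as failed. A sketch of the crictl scrape under that reading (the empty `found id: ""` entry in the log comes from splitting the output on newlines):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainers returns the IDs of all kube-system containers known
	// to the CRI runtime, as in the `crictl ps -a --quiet --label ...` call above.
	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" { // skip the empty trailing entry
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainers()
		fmt.Println(ids, err)
	}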
	I1216 05:04:59.307344  284202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:04:59.316697  284202 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:04:59.316715  284202 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:04:59.316758  284202 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:04:59.327769  284202 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:04:59.328615  284202 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-990631" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:59.329102  284202 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-990631" cluster setting kubeconfig missing "no-preload-990631" context setting]
	I1216 05:04:59.329859  284202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
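	kubeconfig.go first verifies that the profile's cluster and context entries exist in the developer kubeconfig and repairs the file when either is missing, which is what the two messages above report. A sketch of that existence check, assuming a module that vendors k8s.io/client-go:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// needsRepair reports whether a kubeconfig is missing the cluster or the
	// context entry for a profile, the condition kubeconfig.go:62 logs above.
	func needsRepair(path, name string) (bool, error) {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return false, err
		}
		_, hasCluster := cfg.Clusters[name]
		_, hasContext := cfg.Contexts[name]
		return !hasCluster || !hasContext, nil
	}

	func main() {
		repair, err := needsRepair("/home/jenkins/minikube-integration/22141-5066/kubeconfig", "no-preload-990631")
		fmt.Println(repair, err)
	}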
	I1216 05:04:59.331782  284202 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:04:59.341505  284202 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1216 05:04:59.341536  284202 kubeadm.go:602] duration metric: took 24.814841ms to restartPrimaryControlPlane
	I1216 05:04:59.341546  284202 kubeadm.go:403] duration metric: took 82.81371ms to StartCluster
	I1216 05:04:59.341562  284202 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:59.341627  284202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:04:59.343176  284202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:04:59.343399  284202 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:04:59.343621  284202 config.go:182] Loaded profile config "no-preload-990631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:04:59.343681  284202 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:04:59.343779  284202 addons.go:70] Setting storage-provisioner=true in profile "no-preload-990631"
	I1216 05:04:59.343815  284202 addons.go:239] Setting addon storage-provisioner=true in "no-preload-990631"
	W1216 05:04:59.343827  284202 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:04:59.343872  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.343982  284202 addons.go:70] Setting dashboard=true in profile "no-preload-990631"
	I1216 05:04:59.344000  284202 addons.go:70] Setting default-storageclass=true in profile "no-preload-990631"
	I1216 05:04:59.344031  284202 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-990631"
	I1216 05:04:59.344014  284202 addons.go:239] Setting addon dashboard=true in "no-preload-990631"
	W1216 05:04:59.344156  284202 addons.go:248] addon dashboard should already be in state true
	I1216 05:04:59.344188  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.344358  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.344495  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.344647  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.347229  284202 out.go:179] * Verifying Kubernetes components...
	I1216 05:04:59.349480  284202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:04:59.373084  284202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:04:59.373093  284202 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 05:04:59.373127  284202 addons.go:239] Setting addon default-storageclass=true in "no-preload-990631"
	W1216 05:04:59.373249  284202 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:04:59.373317  284202 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:04:59.373809  284202 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:04:59.374487  284202 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:59.374504  284202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:04:59.374557  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.376102  284202 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:04:59.377081  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:04:59.377099  284202 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:04:59.377145  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.399402  284202 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:59.399426  284202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:04:59.399475  284202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:04:59.404857  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:04:59.417430  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:04:59.431357  284202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:04:59.515049  284202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:04:59.526392  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:04:59.531687  284202 node_ready.go:35] waiting up to 6m0s for node "no-preload-990631" to be "Ready" ...
	I1216 05:04:59.533132  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:04:59.533157  284202 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:04:59.538389  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:04:59.549617  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:04:59.549639  284202 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:04:59.569671  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:04:59.569760  284202 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:04:59.593378  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:04:59.593405  284202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:04:59.621948  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:04:59.621972  284202 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:04:59.644345  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:04:59.644448  284202 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:04:59.661945  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:04:59.661975  284202 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:04:59.678222  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:04:59.678252  284202 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:04:59.700122  284202 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:04:59.700148  284202 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:04:59.719777  284202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
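	All ten dashboard manifests are applied in a single kubectl invocation with one -f flag per file, as the Run: line above shows. A sketch of how such a command can be assembled; it sets KUBECONFIG via the environment rather than inline with sudo as the log does, and only two of the ten files are listed:

	package main

	import (
		"os"
		"os/exec"
	)

	// applyManifests runs one `kubectl apply` with a -f flag per manifest,
	// matching the batched dashboard install above.
	func applyManifests(kubectl string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		_ = applyManifests("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		})
	}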
	I1216 05:05:00.484236  284202 node_ready.go:49] node "no-preload-990631" is "Ready"
	I1216 05:05:00.484283  284202 node_ready.go:38] duration metric: took 952.560787ms for node "no-preload-990631" to be "Ready" ...
	I1216 05:05:00.484302  284202 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:00.484360  284202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:01.061883  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.535448575s)
	I1216 05:05:01.061947  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.523532435s)
	I1216 05:05:01.062100  284202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.342232728s)
	I1216 05:05:01.062128  284202 api_server.go:72] duration metric: took 1.718698599s to wait for apiserver process to appear ...
	I1216 05:05:01.062138  284202 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:01.062165  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:01.064061  284202 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-990631 addons enable metrics-server
	
	I1216 05:05:01.067391  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:01.067413  284202 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:01.069255  284202 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1216 05:05:01.070615  284202 addons.go:530] duration metric: took 1.726932128s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1216 05:05:01.563168  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:01.567835  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:01.567870  284202 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:02.062462  284202 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1216 05:05:02.066872  284202 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1216 05:05:02.068217  284202 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:05:02.068239  284202 api_server.go:131] duration metric: took 1.006094275s to wait for apiserver health ...
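	The 500s above are the normal readiness gap right after an apiserver restart: every subsystem reports ok except the rbac/bootstrap-roles (and briefly the priority-classes) post-start hooks, and roughly a second later the same endpoint returns 200. A sketch of the kind of poll api_server.go performs, assuming the self-signed apiserver certificate is not in the host trust store:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls an apiserver /healthz endpoint until it returns 200 or
	// the timeout elapses, tolerating the transient 500s shown in the log above.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Sketch assumption: the apiserver cert is not trusted by the host.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		fmt.Println(waitHealthy("https://192.168.103.2:8443/healthz", time.Minute))
	}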
	I1216 05:05:02.068247  284202 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:02.072657  284202 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:02.072700  284202 system_pods.go:61] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:02.072712  284202 system_pods.go:61] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:02.072720  284202 system_pods.go:61] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:05:02.072735  284202 system_pods.go:61] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:02.072793  284202 system_pods.go:61] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:02.072801  284202 system_pods.go:61] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:05:02.072815  284202 system_pods.go:61] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:02.072820  284202 system_pods.go:61] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Running
	I1216 05:05:02.072827  284202 system_pods.go:74] duration metric: took 4.575509ms to wait for pod list to return data ...
	I1216 05:05:02.072835  284202 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:02.075526  284202 default_sa.go:45] found service account: "default"
	I1216 05:05:02.075545  284202 default_sa.go:55] duration metric: took 2.704707ms for default service account to be created ...
	I1216 05:05:02.075552  284202 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:05:02.078045  284202 system_pods.go:86] 8 kube-system pods found
	I1216 05:05:02.078069  284202 system_pods.go:89] "coredns-7d764666f9-7l2ds" [a2f7a264-de0f-4597-b46f-e25a6b08115f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:02.078076  284202 system_pods.go:89] "etcd-no-preload-990631" [1bfbef7b-de5b-40a0-a49c-9ff1d9abe151] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:02.078081  284202 system_pods.go:89] "kindnet-p27s9" [cf126848-10d8-452c-92f2-fb35ee37b496] Running
	I1216 05:05:02.078087  284202 system_pods.go:89] "kube-apiserver-no-preload-990631" [be2ef8fc-08fd-46c2-b29f-c494e973dcc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:02.078098  284202 system_pods.go:89] "kube-controller-manager-no-preload-990631" [8c3f74e5-d8e8-43e5-8813-9e58be50a299] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:02.078105  284202 system_pods.go:89] "kube-proxy-2bjbr" [4fc43ab4-736b-46b5-ba4d-3cfe0997a3e0] Running
	I1216 05:05:02.078113  284202 system_pods.go:89] "kube-scheduler-no-preload-990631" [a4ba2406-2460-4bf8-b67d-0fd4ee10472d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:02.078120  284202 system_pods.go:89] "storage-provisioner" [ec3f5f16-7fd9-480f-8793-d015baf3ee90] Running
	I1216 05:05:02.078129  284202 system_pods.go:126] duration metric: took 2.571444ms to wait for k8s-apps to be running ...
	I1216 05:05:02.078140  284202 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:05:02.078198  284202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:02.090955  284202 system_svc.go:56] duration metric: took 12.80557ms WaitForService to wait for kubelet
	I1216 05:05:02.090982  284202 kubeadm.go:587] duration metric: took 2.74755423s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:02.090999  284202 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:02.093755  284202 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:02.093776  284202 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:02.093791  284202 node_conditions.go:105] duration metric: took 2.787822ms to run NodePressure ...
	I1216 05:05:02.093801  284202 start.go:242] waiting for startup goroutines ...
	I1216 05:05:02.093808  284202 start.go:247] waiting for cluster config update ...
	I1216 05:05:02.093821  284202 start.go:256] writing updated cluster config ...
	I1216 05:05:02.094117  284202 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:02.097851  284202 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:02.100920  284202 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:05:00.516383  275199 pod_ready.go:104] pod "coredns-5dd5756b68-fplzg" is not "Ready", error: <nil>
	I1216 05:05:03.014998  275199 pod_ready.go:94] pod "coredns-5dd5756b68-fplzg" is "Ready"
	I1216 05:05:03.015051  275199 pod_ready.go:86] duration metric: took 36.505880263s for pod "coredns-5dd5756b68-fplzg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.018010  275199 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.022411  275199 pod_ready.go:94] pod "etcd-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.022432  275199 pod_ready.go:86] duration metric: took 4.39181ms for pod "etcd-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.025267  275199 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.029715  275199 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.029741  275199 pod_ready.go:86] duration metric: took 4.454401ms for pod "kube-apiserver-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.032803  275199 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.212694  275199 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-545075" is "Ready"
	I1216 05:05:03.212718  275199 pod_ready.go:86] duration metric: took 179.891662ms for pod "kube-controller-manager-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:04:59.034761  285878 out.go:252] * Restarting existing docker container for "embed-certs-127599" ...
	I1216 05:04:59.034846  285878 cli_runner.go:164] Run: docker start embed-certs-127599
	I1216 05:04:59.342519  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:04:59.369236  285878 kic.go:430] container "embed-certs-127599" state is running.
	I1216 05:04:59.369666  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:04:59.398479  285878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/config.json ...
	I1216 05:04:59.398806  285878 machine.go:94] provisionDockerMachine start ...
	I1216 05:04:59.398910  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:04:59.434387  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:04:59.434715  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:04:59.434744  285878 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:04:59.435373  285878 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57022->127.0.0.1:33091: read: connection reset by peer
	I1216 05:05:02.565577  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-127599
	
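	The `connection reset by peer` at 05:04:59 is expected: the container was started moments earlier and its sshd was not yet accepting connections, so libmachine retries until the hostname command succeeds at 05:05:02. A sketch of that dial-with-retry pattern, assuming golang.org/x/crypto/ssh and the profile key path from the log; host-key checking is disabled as in a throwaway test rig:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps dialing an SSH endpoint until the handshake succeeds
	// or the attempts run out, absorbing early connection resets like the one above.
	func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
			Timeout:         5 * time.Second,
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			time.Sleep(time.Second)
		}
		return nil, fmt.Errorf("ssh %s: %w", addr, lastErr)
	}

	func main() {
		_, err := dialWithRetry("127.0.0.1:33091", "docker",
			"/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa", 10)
		fmt.Println(err)
	}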
	I1216 05:05:02.565604  285878 ubuntu.go:182] provisioning hostname "embed-certs-127599"
	I1216 05:05:02.565682  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.585360  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:02.585581  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:02.585595  285878 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-127599 && echo "embed-certs-127599" | sudo tee /etc/hostname
	I1216 05:05:02.721382  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-127599
	
	I1216 05:05:02.721466  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.739517  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:02.739721  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:02.739737  285878 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-127599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-127599/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-127599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:02.866626  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:02.866657  285878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:02.866676  285878 ubuntu.go:190] setting up certificates
	I1216 05:05:02.866687  285878 provision.go:84] configureAuth start
	I1216 05:05:02.866760  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:05:02.884961  285878 provision.go:143] copyHostCerts
	I1216 05:05:02.885047  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:02.885066  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:02.885138  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:02.885238  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:02.885250  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:02.885279  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:02.885346  285878 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:02.885354  285878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:02.885376  285878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:02.885439  285878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.embed-certs-127599 san=[127.0.0.1 192.168.94.2 embed-certs-127599 localhost minikube]
	I1216 05:05:02.905981  285878 provision.go:177] copyRemoteCerts
	I1216 05:05:02.906048  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:02.906098  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:02.925287  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.020273  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:03.041871  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:05:03.060678  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:05:03.078064  285878 provision.go:87] duration metric: took 211.357317ms to configureAuth
	I1216 05:05:03.078092  285878 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:03.078242  285878 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:03.078323  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.096032  285878 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:03.096295  285878 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1216 05:05:03.096316  285878 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:03.415229  285878 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:03.415249  285878 machine.go:97] duration metric: took 4.016422304s to provisionDockerMachine
	I1216 05:05:03.415264  285878 start.go:293] postStartSetup for "embed-certs-127599" (driver="docker")
	I1216 05:05:03.415277  285878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:03.415379  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:03.415436  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.434727  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.526138  285878 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:03.529645  285878 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:03.529676  285878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:03.529688  285878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:03.529745  285878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:03.529814  285878 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:03.529894  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:03.537519  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:03.555384  285878 start.go:296] duration metric: took 140.103941ms for postStartSetup
	I1216 05:05:03.555454  285878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:03.555498  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.574541  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.671209  285878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:03.675704  285878 fix.go:56] duration metric: took 4.669173594s for fixHost
	I1216 05:05:03.675732  285878 start.go:83] releasing machines lock for "embed-certs-127599", held for 4.669223523s
	I1216 05:05:03.675807  285878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-127599
	I1216 05:05:03.693006  285878 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:03.693089  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.693130  285878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:03.693193  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:03.711276  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.715049  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:03.415146  275199 pod_ready.go:83] waiting for pod "kube-proxy-7pldc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:03.813274  275199 pod_ready.go:94] pod "kube-proxy-7pldc" is "Ready"
	I1216 05:05:03.813309  275199 pod_ready.go:86] duration metric: took 398.130225ms for pod "kube-proxy-7pldc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.016765  275199 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.412999  275199 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-545075" is "Ready"
	I1216 05:05:04.413055  275199 pod_ready.go:86] duration metric: took 396.261871ms for pod "kube-scheduler-old-k8s-version-545075" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:04.413072  275199 pod_ready.go:40] duration metric: took 37.908213005s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:04.460171  275199 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1216 05:05:04.461837  275199 out.go:203] 
	W1216 05:05:04.462951  275199 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1216 05:05:04.464050  275199 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1216 05:05:04.465161  275199 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-545075" cluster and "default" namespace by default
	I1216 05:05:03.870367  285878 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:03.877245  285878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:03.911801  285878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:03.916421  285878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:03.916499  285878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:03.924729  285878 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:05:03.924754  285878 start.go:496] detecting cgroup driver to use...
	I1216 05:05:03.924789  285878 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:03.924826  285878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:03.938355  285878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:03.950687  285878 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:03.950755  285878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:03.966322  285878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:03.978654  285878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:04.067259  285878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:04.157828  285878 docker.go:234] disabling docker service ...
	I1216 05:05:04.157924  285878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:04.172457  285878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:04.185242  285878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:04.268722  285878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:04.352800  285878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:04.369946  285878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:04.389991  285878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:04.390111  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.402275  285878 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:04.402346  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.413804  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.423233  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.433098  285878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:04.441987  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.452342  285878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.461567  285878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:04.470671  285878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:04.478121  285878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:04.485428  285878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:04.584484  285878 ssh_runner.go:195] Run: sudo systemctl restart crio
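	The run of sed commands above rewrites single `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A Go sketch of that line-replacement edit for the two simple keys; the file path and values are the ones from the log, and like the `sed -i 's|^.*key = .*$|...|'` originals it replaces whole matching lines, commented or not:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfKey rewrites every line of the form `... key = ...` in a config
	// file to `key = "value"`, the same effect as the logged sed edits.
	func setConfKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		_ = setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1")
		_ = setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "systemd")
	}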
	I1216 05:05:04.714916  285878 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:04.715042  285878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:04.719134  285878 start.go:564] Will wait 60s for crictl version
	I1216 05:05:04.719175  285878 ssh_runner.go:195] Run: which crictl
	I1216 05:05:04.722694  285878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:04.750409  285878 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:05:04.750485  285878 ssh_runner.go:195] Run: crio --version
	I1216 05:05:04.786143  285878 ssh_runner.go:195] Run: crio --version
	I1216 05:05:04.822457  285878 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:05:04.823609  285878 cli_runner.go:164] Run: docker network inspect embed-certs-127599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:04.842823  285878 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1216 05:05:04.847000  285878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:04.857344  285878 kubeadm.go:884] updating cluster {Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:05:04.857466  285878 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:04.857593  285878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:04.895769  285878 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:04.895807  285878 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:05:04.895868  285878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:04.930726  285878 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:04.930758  285878 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:05:04.930767  285878 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1216 05:05:04.930907  285878 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-127599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:05:04.931008  285878 ssh_runner.go:195] Run: crio config
	I1216 05:05:04.997735  285878 cni.go:84] Creating CNI manager for ""
	I1216 05:05:04.997762  285878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:04.997780  285878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:05:04.997819  285878 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-127599 NodeName:embed-certs-127599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:05:04.997960  285878 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-127599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
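The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. Roughly the same comparison by hand:

	minikube -p embed-certs-127599 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new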
	I1216 05:05:04.998048  285878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:05:05.008874  285878 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:05:05.008944  285878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:05:05.020305  285878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1216 05:05:05.037984  285878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:05:05.055161  285878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1216 05:05:05.074450  285878 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:05:05.080583  285878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:05.094342  285878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:05.222260  285878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:05.249350  285878 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599 for IP: 192.168.94.2
	I1216 05:05:05.249372  285878 certs.go:195] generating shared ca certs ...
	I1216 05:05:05.249392  285878 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:05.249576  285878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:05:05.249640  285878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:05:05.249652  285878 certs.go:257] generating profile certs ...
	I1216 05:05:05.249786  285878 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/client.key
	I1216 05:05:05.249879  285878 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/apiserver.key.8323bd4e
	I1216 05:05:05.249938  285878 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/proxy-client.key
	I1216 05:05:05.250125  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:05:05.250179  285878 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:05:05.250196  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:05:05.250239  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:05:05.250282  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:05:05.250325  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:05:05.250394  285878 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:05.251932  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:05:05.279924  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:05:05.308114  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:05:05.340146  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:05:05.375339  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 05:05:05.400675  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:05:05.426336  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:05:05.456123  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/embed-certs-127599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:05:05.485328  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:05:05.514764  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:05:05.538413  285878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:05:05.562818  285878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:05:05.581530  285878 ssh_runner.go:195] Run: openssl version
	I1216 05:05:05.590154  285878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:05.601115  285878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:05:05.613737  285878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:05.619334  285878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:05.619387  285878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:05.675958  285878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:05:05.685906  285878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:05:05.696414  285878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:05:05.708508  285878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:05:05.713636  285878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:05:05.713711  285878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:05:05.769068  285878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:05:05.779303  285878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:05:05.789116  285878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:05:05.799879  285878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:05:05.805200  285878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:05:05.805252  285878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:05:05.865108  285878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
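The `openssl x509 -hash` / `test -L` pairs above follow the standard OpenSSL c_rehash layout: each trusted certificate is linked under /etc/ssl/certs as <subject-hash>.0. A sketch of reproducing the first check by hand on the node:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"    # b5213941.0 in this run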
	I1216 05:05:05.876011  285878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:05:05.880721  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:05:05.952652  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:05:06.022258  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:05:06.085243  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:05:06.147255  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:05:06.204474  285878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
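Each `-checkend 86400` call exits non-zero if the certificate expires within the next 24 hours; a non-zero exit is what would trigger certificate regeneration here. By hand:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring within 24h"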
	I1216 05:05:06.268071  285878 kubeadm.go:401] StartCluster: {Name:embed-certs-127599 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-127599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:06.268176  285878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:05:06.268232  285878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:05:06.302937  285878 cri.go:89] found id: "5aba47586bca5359899a12179f858df845d0a3d7ff35738262380c0dc7df2808"
	I1216 05:05:06.302965  285878 cri.go:89] found id: "9470750a66d86139939d4b618522b12a7fec97282112773930ad51d8f35b0afe"
	I1216 05:05:06.302971  285878 cri.go:89] found id: "aaeef0aba55ec8b43be948ea78142e157863fc7464359991825255d4956d31c7"
	I1216 05:05:06.302977  285878 cri.go:89] found id: "40939f377b2ea50bcacad648ad4f8c9351d77ec9c17f14735f75e1fce438fe46"
	I1216 05:05:06.302982  285878 cri.go:89] found id: ""
	I1216 05:05:06.303043  285878 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:05:06.317237  285878 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:06Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:06.317304  285878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:05:06.326777  285878 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:05:06.326798  285878 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:05:06.326835  285878 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:05:06.336136  285878 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:05:06.336879  285878 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-127599" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:06.337334  285878 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-127599" cluster setting kubeconfig missing "embed-certs-127599" context setting]
	I1216 05:05:06.337902  285878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:06.339927  285878 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:05:06.350230  285878 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1216 05:05:06.350271  285878 kubeadm.go:602] duration metric: took 23.46779ms to restartPrimaryControlPlane
	I1216 05:05:06.350281  285878 kubeadm.go:403] duration metric: took 82.219997ms to StartCluster
	I1216 05:05:06.350299  285878 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:06.350363  285878 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:06.352595  285878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:06.352830  285878 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:06.353025  285878 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:05:06.353201  285878 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-127599"
	I1216 05:05:06.353225  285878 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-127599"
	W1216 05:05:06.353234  285878 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:05:06.353242  285878 addons.go:70] Setting dashboard=true in profile "embed-certs-127599"
	I1216 05:05:06.353273  285878 addons.go:239] Setting addon dashboard=true in "embed-certs-127599"
	I1216 05:05:06.353282  285878 addons.go:70] Setting default-storageclass=true in profile "embed-certs-127599"
	W1216 05:05:06.353287  285878 addons.go:248] addon dashboard should already be in state true
	I1216 05:05:06.353299  285878 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-127599"
	I1216 05:05:06.353328  285878 host.go:66] Checking if "embed-certs-127599" exists ...
	I1216 05:05:06.353349  285878 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:06.353273  285878 host.go:66] Checking if "embed-certs-127599" exists ...
	I1216 05:05:06.353716  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:06.353793  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:06.353983  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:06.358089  285878 out.go:179] * Verifying Kubernetes components...
	I1216 05:05:06.361102  285878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:06.382904  285878 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:05:06.384054  285878 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:05:06.384079  285878 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:06.384096  285878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:05:06.384165  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:06.385061  285878 addons.go:239] Setting addon default-storageclass=true in "embed-certs-127599"
	W1216 05:05:06.385092  285878 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:05:06.385123  285878 host.go:66] Checking if "embed-certs-127599" exists ...
	I1216 05:05:06.385865  285878 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:06.386297  285878 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1216 05:05:04.107167  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:06.109608  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:06.387112  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:05:06.387166  285878 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:05:06.387263  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:06.415104  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:06.418043  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:06.434363  285878 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:06.434389  285878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:05:06.434448  285878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:06.467298  285878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:06.567848  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:05:06.567877  285878 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:05:06.568462  285878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:06.585004  285878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:06.588783  285878 node_ready.go:35] waiting up to 6m0s for node "embed-certs-127599" to be "Ready" ...
	I1216 05:05:06.596291  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:05:06.596320  285878 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:05:06.619562  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:05:06.619588  285878 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:05:06.619974  285878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:06.646359  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:05:06.646387  285878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:05:06.673497  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:05:06.673530  285878 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:05:06.698759  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:05:06.698796  285878 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:05:06.722359  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:05:06.722383  285878 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:05:06.739863  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:05:06.739889  285878 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:05:06.760730  285878 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:06.760840  285878 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:05:06.780963  285878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:07.960700  285878 node_ready.go:49] node "embed-certs-127599" is "Ready"
	I1216 05:05:07.960749  285878 node_ready.go:38] duration metric: took 1.371929833s for node "embed-certs-127599" to be "Ready" ...
	I1216 05:05:07.960766  285878 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:07.960923  285878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:08.754763  285878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.169687979s)
	I1216 05:05:08.754838  285878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.134840002s)
	I1216 05:05:08.755278  285878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.974260218s)
	I1216 05:05:08.755608  285878 api_server.go:72] duration metric: took 2.402740073s to wait for apiserver process to appear ...
	I1216 05:05:08.755623  285878 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:08.755642  285878 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 05:05:08.757738  285878 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-127599 addons enable metrics-server
	
	I1216 05:05:08.763784  285878 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:08.763817  285878 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
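The [+]/[-] listing is the apiserver's verbose healthz output; the two [-] entries (rbac bootstrap roles and system priority classes) are post-start hooks that routinely finish moments after a restart, which matches the 200 returned on the next poll below. A sketch of querying the endpoint by hand, assuming curl is available in the node image:

	minikube -p embed-certs-127599 ssh -- sudo curl -s --cacert /var/lib/minikube/certs/ca.crt 'https://192.168.94.2:8443/healthz?verbose'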
	I1216 05:05:08.775569  285878 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1216 05:05:08.776663  285878 addons.go:530] duration metric: took 2.423654258s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
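A sketch of confirming the just-enabled dashboard actually scheduled, assuming minikube created a kubeconfig context named after the profile:

	kubectl --context embed-certs-127599 -n kubernetes-dashboard get pods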
	W1216 05:05:08.607566  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:11.105819  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:09.256465  285878 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1216 05:05:09.261737  285878 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1216 05:05:09.262830  285878 api_server.go:141] control plane version: v1.34.2
	I1216 05:05:09.262856  285878 api_server.go:131] duration metric: took 507.225223ms to wait for apiserver health ...
	I1216 05:05:09.262867  285878 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:09.266811  285878 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:09.266863  285878 system_pods.go:61] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:09.266888  285878 system_pods.go:61] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:09.266899  285878 system_pods.go:61] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:09.266913  285878 system_pods.go:61] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:09.266923  285878 system_pods.go:61] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:09.266935  285878 system_pods.go:61] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:09.266947  285878 system_pods.go:61] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:09.266959  285878 system_pods.go:61] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:09.266973  285878 system_pods.go:74] duration metric: took 4.098506ms to wait for pod list to return data ...
	I1216 05:05:09.266989  285878 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:09.269276  285878 default_sa.go:45] found service account: "default"
	I1216 05:05:09.269293  285878 default_sa.go:55] duration metric: took 2.297068ms for default service account to be created ...
	I1216 05:05:09.269303  285878 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:05:09.271995  285878 system_pods.go:86] 8 kube-system pods found
	I1216 05:05:09.272057  285878 system_pods.go:89] "coredns-66bc5c9577-47h64" [fe03346d-aa19-42e5-9352-7aed24d60503] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:09.272077  285878 system_pods.go:89] "etcd-embed-certs-127599" [cc37d720-33c6-47a1-a2bd-667e4e10071d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:09.272106  285878 system_pods.go:89] "kindnet-nqzjf" [740c8d9d-3a80-45a8-8916-67d54515815e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:09.272132  285878 system_pods.go:89] "kube-apiserver-embed-certs-127599" [a53fa7c8-a705-4f2a-8d7d-6961511ae8f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:09.272148  285878 system_pods.go:89] "kube-controller-manager-embed-certs-127599" [f3602199-6382-4f06-b308-c02ba49305f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:09.272159  285878 system_pods.go:89] "kube-proxy-j79cw" [91e8a52c-f22e-4d40-871b-e6a5daeb9e5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:09.272169  285878 system_pods.go:89] "kube-scheduler-embed-certs-127599" [a7da3214-6b92-4858-833f-682f1d65f8a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:09.272179  285878 system_pods.go:89] "storage-provisioner" [ae7b55f0-073f-4442-bc1a-e168246b4bd6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:09.272189  285878 system_pods.go:126] duration metric: took 2.878527ms to wait for k8s-apps to be running ...
	I1216 05:05:09.272200  285878 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:05:09.272253  285878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:09.289320  285878 system_svc.go:56] duration metric: took 17.114578ms WaitForService to wait for kubelet
	I1216 05:05:09.289346  285878 kubeadm.go:587] duration metric: took 2.93648066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:09.289361  285878 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:09.293536  285878 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:09.293575  285878 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:09.293588  285878 node_conditions.go:105] duration metric: took 4.222572ms to run NodePressure ...
	I1216 05:05:09.293599  285878 start.go:242] waiting for startup goroutines ...
	I1216 05:05:09.293606  285878 start.go:247] waiting for cluster config update ...
	I1216 05:05:09.293617  285878 start.go:256] writing updated cluster config ...
	I1216 05:05:09.293930  285878 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:09.298463  285878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:09.302479  285878 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:05:11.307286  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:13.308848  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:13.106844  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:15.106911  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:17.107424  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:15.808872  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:18.307736  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 16 05:04:45 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:45.103340055Z" level=info msg="Started container" PID=1756 containerID=1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper id=4953d307-798f-475c-8957-42dd2e9c400c name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bb54ffac6141a4e9aab3b7127ff5e816f4aa3882dbd089bda90c8e44917036b
	Dec 16 05:04:46 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:46.069247685Z" level=info msg="Removing container: d79585d36d5a9508dc022088c8249763913f6f7e7d0455f1f8c134bfcd73968a" id=a0b995ba-0ecf-47cc-8256-3abd49a54944 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:04:46 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:46.078766739Z" level=info msg="Removed container d79585d36d5a9508dc022088c8249763913f6f7e7d0455f1f8c134bfcd73968a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper" id=a0b995ba-0ecf-47cc-8256-3abd49a54944 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.093985821Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=245e57c6-a598-42bc-8f12-9d9b79a7473c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.094751503Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=72a81a74-0cd3-4fab-a291-f1f1da7ec83d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.095618426Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dda621dc-3edc-47d5-9556-efb618f2c0a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.095741974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.100072394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.100260381Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5a4cf9849c45739ac5962ab228597069565903c1e85416d0824f8bc75fa0c468/merged/etc/passwd: no such file or directory"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.100293309Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5a4cf9849c45739ac5962ab228597069565903c1e85416d0824f8bc75fa0c468/merged/etc/group: no such file or directory"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.100572313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.131153781Z" level=info msg="Created container d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4: kube-system/storage-provisioner/storage-provisioner" id=dda621dc-3edc-47d5-9556-efb618f2c0a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.131831479Z" level=info msg="Starting container: d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4" id=4838c256-22fd-4504-84cd-b19ffe8b7cc2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:04:56 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:56.133835911Z" level=info msg="Started container" PID=1771 containerID=d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4 description=kube-system/storage-provisioner/storage-provisioner id=4838c256-22fd-4504-84cd-b19ffe8b7cc2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cab3e68bf0fb2a70735abde6d4ecd368c85d337f9a4addb30709fa988b5c8b3
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.98050927Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=502413d3-586d-4b49-8666-39c504cabd98 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.981496469Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8af85ca2-b954-4c60-8804-c446fd1079cb name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.98258164Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper" id=a6fe91d4-be4b-489b-9595-4f0de70555e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.982709727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.988966811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:04:59 old-k8s-version-545075 crio[567]: time="2025-12-16T05:04:59.989646108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.020922925Z" level=info msg="Created container 5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper" id=a6fe91d4-be4b-489b-9595-4f0de70555e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.021611188Z" level=info msg="Starting container: 5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7" id=6847f6ad-33d7-4dce-a5ab-663c7af54ee4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.023560499Z" level=info msg="Started container" PID=1787 containerID=5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper id=6847f6ad-33d7-4dce-a5ab-663c7af54ee4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bb54ffac6141a4e9aab3b7127ff5e816f4aa3882dbd089bda90c8e44917036b
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.109222132Z" level=info msg="Removing container: 1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32" id=1caec7cc-620b-41bc-8542-b75c188bb1e2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:00 old-k8s-version-545075 crio[567]: time="2025-12-16T05:05:00.118619161Z" level=info msg="Removed container 1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l/dashboard-metrics-scraper" id=1caec7cc-620b-41bc-8542-b75c188bb1e2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	5defdfa105d9d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   5bb54ffac6141       dashboard-metrics-scraper-5f989dc9cf-n2t2l       kubernetes-dashboard
	d3f4510bb12b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   1cab3e68bf0fb       storage-provisioner                              kube-system
	1f44c8595e54b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   30afaaff325cc       kubernetes-dashboard-8694d4445c-pv585            kubernetes-dashboard
	0ab9aa1f7ad82       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   0f03f0b8fe809       busybox                                          default
	76ae5f75a2c9c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   e6cc7098dd604       coredns-5dd5756b68-fplzg                         kube-system
	a6c3f822d7671       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   8c6e3f828cfab       kindnet-r74zw                                    kube-system
	ac47c1b8c4f01       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   0785751741b1b       kube-proxy-7pldc                                 kube-system
	926327055a53d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   1cab3e68bf0fb       storage-provisioner                              kube-system
	01c4e82517b36       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     0                   5eec7d4fd18fa       kube-controller-manager-old-k8s-version-545075   kube-system
	5aeeae98995f1       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              0                   1d1533d086273       kube-apiserver-old-k8s-version-545075            kube-system
	31f7764ec0af5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        0                   9ce25643ad352       etcd-old-k8s-version-545075                      kube-system
	af4a752092a6d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   dbc5090e92c7f       kube-scheduler-old-k8s-version-545075            kube-system
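The table above is collected from the node via crictl; roughly the same view by hand, assuming the profile name from this run:

	minikube -p old-k8s-version-545075 ssh -- sudo crictl ps -a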
	
	
	==> coredns [76ae5f75a2c9c80d756790c824b41bf144abd8148eaeb535a627919c1efa7675] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58788 - 3741 "HINFO IN 4922608715032854283.1290291217694517511. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014974135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-545075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-545075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=old-k8s-version-545075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_03_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:03:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-545075
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:05:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:04:55 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:04:55 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:04:55 +0000   Tue, 16 Dec 2025 05:03:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:04:55 +0000   Tue, 16 Dec 2025 05:03:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-545075
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                37fbaf4b-0aa1-43d5-a5d3-708d61d6a5be
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-fplzg                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-old-k8s-version-545075                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-r74zw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-545075             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-545075    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-7pldc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-545075             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-n2t2l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pv585             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-545075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-545075 event: Registered Node old-k8s-version-545075 in Controller
	  Normal  NodeReady                99s                    kubelet          Node old-k8s-version-545075 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 59s)      kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)      kubelet          Node old-k8s-version-545075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 59s)      kubelet          Node old-k8s-version-545075 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                    node-controller  Node old-k8s-version-545075 event: Registered Node old-k8s-version-545075 in Controller
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [31f7764ec0af58c2787749a48d903be38a43b0773a2a9b369902c7942218dc65] <==
	{"level":"info","ts":"2025-12-16T05:04:22.552161Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-16T05:04:22.552172Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-16T05:04:22.552523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-16T05:04:22.552616Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-16T05:04:22.552824Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T05:04:22.552879Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-16T05:04:22.555858Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-16T05:04:22.556132Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-16T05:04:22.556166Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-16T05:04:22.556285Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-16T05:04:22.556295Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-16T05:04:23.844472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-16T05:04:23.844514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-16T05:04:23.844563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-16T05:04:23.84458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-16T05:04:23.84459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-16T05:04:23.844598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-16T05:04:23.844605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-16T05:04:23.846354Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-545075 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-16T05:04:23.846361Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T05:04:23.84639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-16T05:04:23.846602Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-16T05:04:23.846635Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-16T05:04:23.847591Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-16T05:04:23.847604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 05:05:21 up 47 min,  0 user,  load average: 3.72, 2.77, 1.94
	Linux old-k8s-version-545075 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6c3f822d7671ca410f51914f238e079a6de421c2cd37d8725a605390702b0ff] <==
	I1216 05:04:25.475497       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:04:25.476156       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 05:04:25.476555       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:04:25.476578       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:04:25.476600       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:04:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:04:25.775169       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:04:25.775194       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:04:25.870295       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:04:25.871356       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:04:26.271609       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:04:26.271642       1 metrics.go:72] Registering metrics
	I1216 05:04:26.271717       1 controller.go:711] "Syncing nftables rules"
	I1216 05:04:35.776059       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:04:35.776117       1 main.go:301] handling current node
	I1216 05:04:45.775714       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:04:45.775774       1 main.go:301] handling current node
	I1216 05:04:55.776001       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:04:55.776065       1 main.go:301] handling current node
	I1216 05:05:05.776108       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:05:05.776217       1 main.go:301] handling current node
	I1216 05:05:15.781151       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 05:05:15.781183       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5aeeae98995f110f455b137eea822f077ed8722bcfac6f2d52cb7baa211a4f13] <==
	I1216 05:04:24.854617       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1216 05:04:24.924259       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:04:24.954199       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1216 05:04:24.954230       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 05:04:24.954444       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1216 05:04:24.954253       1 shared_informer.go:318] Caches are synced for configmaps
	I1216 05:04:24.954251       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1216 05:04:24.954539       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1216 05:04:24.954309       1 aggregator.go:166] initial CRD sync complete...
	I1216 05:04:24.954653       1 autoregister_controller.go:141] Starting autoregister controller
	I1216 05:04:24.954659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:04:24.954667       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:04:24.954341       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:04:25.020456       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1216 05:04:25.846740       1 controller.go:624] quota admission added evaluator for: namespaces
	I1216 05:04:25.856277       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:04:25.879396       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1216 05:04:25.896128       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:04:25.904064       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:04:25.911165       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1216 05:04:25.946813       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.150.129"}
	I1216 05:04:25.962002       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.48.32"}
	I1216 05:04:38.146271       1 controller.go:624] quota admission added evaluator for: endpoints
	I1216 05:04:38.347624       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:04:38.446689       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [01c4e82517b36d1f66b8f2d1adb4883fc3cc6a22d531ab5783d092744f69a8d2] <==
	I1216 05:04:38.404050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.048µs"
	I1216 05:04:38.450438       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1216 05:04:38.450770       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1216 05:04:38.458295       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-n2t2l"
	I1216 05:04:38.458328       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-pv585"
	I1216 05:04:38.458829       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 05:04:38.464685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="14.707927ms"
	I1216 05:04:38.467782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.606352ms"
	I1216 05:04:38.473202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="8.471905ms"
	I1216 05:04:38.473286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.153µs"
	I1216 05:04:38.477003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.857µs"
	I1216 05:04:38.477414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.569893ms"
	I1216 05:04:38.477486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.419µs"
	I1216 05:04:38.485532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.396µs"
	I1216 05:04:38.516230       1 shared_informer.go:318] Caches are synced for garbage collector
	I1216 05:04:38.516265       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1216 05:04:42.081646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.294011ms"
	I1216 05:04:42.084625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.928µs"
	I1216 05:04:45.073883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.23µs"
	I1216 05:04:46.078186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.056µs"
	I1216 05:04:47.081152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.173µs"
	I1216 05:05:00.119549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.79µs"
	I1216 05:05:02.830758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.349493ms"
	I1216 05:05:02.830855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.788µs"
	I1216 05:05:08.781046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.897µs"
	
	
	==> kube-proxy [ac47c1b8c4f01cb1fbcb3874ad2386605c553ce37f62c41a839967cc9e9f24d9] <==
	I1216 05:04:25.365812       1 server_others.go:69] "Using iptables proxy"
	I1216 05:04:25.379770       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1216 05:04:25.401079       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:04:25.405947       1 server_others.go:152] "Using iptables Proxier"
	I1216 05:04:25.406065       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1216 05:04:25.406105       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1216 05:04:25.406180       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1216 05:04:25.406698       1 server.go:846] "Version info" version="v1.28.0"
	I1216 05:04:25.406762       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:04:25.408702       1 config.go:97] "Starting endpoint slice config controller"
	I1216 05:04:25.408747       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1216 05:04:25.411209       1 config.go:315] "Starting node config controller"
	I1216 05:04:25.411225       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1216 05:04:25.409336       1 config.go:188] "Starting service config controller"
	I1216 05:04:25.411780       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1216 05:04:25.509707       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1216 05:04:25.512879       1 shared_informer.go:318] Caches are synced for service config
	I1216 05:04:25.513468       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [af4a752092a6dfab68baa25810fa6bf8af6321385deaa1ca39c71f745e63993c] <==
	I1216 05:04:23.087249       1 serving.go:348] Generated self-signed cert in-memory
	W1216 05:04:24.879876       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:04:24.882081       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:04:24.882124       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:04:24.882135       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:04:24.908369       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1216 05:04:24.908399       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:04:24.909990       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:04:24.910030       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 05:04:24.912035       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1216 05:04:24.912093       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1216 05:04:24.914459       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E1216 05:04:24.914503       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1216 05:04:24.917525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 05:04:24.917919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1216 05:04:24.917667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 05:04:24.918350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1216 05:04:24.917794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 05:04:24.918492       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1216 05:04:24.917863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 05:04:24.918568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1216 05:04:24.922219       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 05:04:24.922558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1216 05:04:26.110679       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 05:04:38 old-k8s-version-545075 kubelet[723]: I1216 05:04:38.589636     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3222436e-fc8d-4cce-8235-3c3b4766d3db-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pv585\" (UID: \"3222436e-fc8d-4cce-8235-3c3b4766d3db\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv585"
	Dec 16 05:04:38 old-k8s-version-545075 kubelet[723]: I1216 05:04:38.589697     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs86t\" (UniqueName: \"kubernetes.io/projected/3222436e-fc8d-4cce-8235-3c3b4766d3db-kube-api-access-qs86t\") pod \"kubernetes-dashboard-8694d4445c-pv585\" (UID: \"3222436e-fc8d-4cce-8235-3c3b4766d3db\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv585"
	Dec 16 05:04:38 old-k8s-version-545075 kubelet[723]: I1216 05:04:38.589731     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f599327d-7596-48d8-ad09-635c4236b746-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-n2t2l\" (UID: \"f599327d-7596-48d8-ad09-635c4236b746\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l"
	Dec 16 05:04:38 old-k8s-version-545075 kubelet[723]: I1216 05:04:38.589833     723 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x55z6\" (UniqueName: \"kubernetes.io/projected/f599327d-7596-48d8-ad09-635c4236b746-kube-api-access-x55z6\") pod \"dashboard-metrics-scraper-5f989dc9cf-n2t2l\" (UID: \"f599327d-7596-48d8-ad09-635c4236b746\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l"
	Dec 16 05:04:42 old-k8s-version-545075 kubelet[723]: I1216 05:04:42.073782     723 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv585" podStartSLOduration=1.082209908 podCreationTimestamp="2025-12-16 05:04:38 +0000 UTC" firstStartedPulling="2025-12-16 05:04:38.798925593 +0000 UTC m=+16.984536809" lastFinishedPulling="2025-12-16 05:04:41.790420193 +0000 UTC m=+19.976031416" observedRunningTime="2025-12-16 05:04:42.0733246 +0000 UTC m=+20.258935833" watchObservedRunningTime="2025-12-16 05:04:42.073704515 +0000 UTC m=+20.259315749"
	Dec 16 05:04:45 old-k8s-version-545075 kubelet[723]: I1216 05:04:45.063584     723 scope.go:117] "RemoveContainer" containerID="d79585d36d5a9508dc022088c8249763913f6f7e7d0455f1f8c134bfcd73968a"
	Dec 16 05:04:46 old-k8s-version-545075 kubelet[723]: I1216 05:04:46.067902     723 scope.go:117] "RemoveContainer" containerID="d79585d36d5a9508dc022088c8249763913f6f7e7d0455f1f8c134bfcd73968a"
	Dec 16 05:04:46 old-k8s-version-545075 kubelet[723]: I1216 05:04:46.068150     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:04:46 old-k8s-version-545075 kubelet[723]: E1216 05:04:46.068546     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:04:47 old-k8s-version-545075 kubelet[723]: I1216 05:04:47.071272     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:04:47 old-k8s-version-545075 kubelet[723]: E1216 05:04:47.071508     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:04:48 old-k8s-version-545075 kubelet[723]: I1216 05:04:48.766884     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:04:48 old-k8s-version-545075 kubelet[723]: E1216 05:04:48.767200     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:04:56 old-k8s-version-545075 kubelet[723]: I1216 05:04:56.093638     723 scope.go:117] "RemoveContainer" containerID="926327055a53dcd7c24cb4836a69ddf126c05855312a52841c42a80393b3226a"
	Dec 16 05:04:59 old-k8s-version-545075 kubelet[723]: I1216 05:04:59.979806     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:05:00 old-k8s-version-545075 kubelet[723]: I1216 05:05:00.108005     723 scope.go:117] "RemoveContainer" containerID="1c8c1e817b4e6288d2523955ebed86287619e65c38ac0758e9f613682d128d32"
	Dec 16 05:05:00 old-k8s-version-545075 kubelet[723]: I1216 05:05:00.108233     723 scope.go:117] "RemoveContainer" containerID="5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7"
	Dec 16 05:05:00 old-k8s-version-545075 kubelet[723]: E1216 05:05:00.108589     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:05:08 old-k8s-version-545075 kubelet[723]: I1216 05:05:08.767293     723 scope.go:117] "RemoveContainer" containerID="5defdfa105d9db22f64ef8374ab768dc0f2e2845c4aab82225cf59fdae1b55c7"
	Dec 16 05:05:08 old-k8s-version-545075 kubelet[723]: E1216 05:05:08.769078     723 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-n2t2l_kubernetes-dashboard(f599327d-7596-48d8-ad09-635c4236b746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-n2t2l" podUID="f599327d-7596-48d8-ad09-635c4236b746"
	Dec 16 05:05:16 old-k8s-version-545075 kubelet[723]: I1216 05:05:16.752520     723 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 05:05:16 old-k8s-version-545075 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:05:16 old-k8s-version-545075 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:05:16 old-k8s-version-545075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:05:16 old-k8s-version-545075 systemd[1]: kubelet.service: Consumed 1.581s CPU time.
	
	
	==> kubernetes-dashboard [1f44c8595e54b12a159c20e93ca46146d14d3cb2b264a6dddf302b7d3be69f68] <==
	2025/12/16 05:04:41 Using namespace: kubernetes-dashboard
	2025/12/16 05:04:41 Using in-cluster config to connect to apiserver
	2025/12/16 05:04:41 Using secret token for csrf signing
	2025/12/16 05:04:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 05:04:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 05:04:41 Successful initial request to the apiserver, version: v1.28.0
	2025/12/16 05:04:41 Generating JWE encryption key
	2025/12/16 05:04:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 05:04:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 05:04:42 Initializing JWE encryption key from synchronized object
	2025/12/16 05:04:42 Creating in-cluster Sidecar client
	2025/12/16 05:04:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:04:42 Serving insecurely on HTTP port: 9090
	2025/12/16 05:05:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:04:41 Starting overwatch
	
	
	==> storage-provisioner [926327055a53dcd7c24cb4836a69ddf126c05855312a52841c42a80393b3226a] <==
	I1216 05:04:25.312199       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 05:04:55.315468       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d3f4510bb12b0c21aecb8176740504ed535eda6166982171d22389ee45dc29d4] <==
	I1216 05:04:56.146405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:04:56.154381       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:04:56.154414       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 05:05:13.552405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:05:13.552559       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-545075_7e8e2828-29d0-45ba-8fe3-878e82b9743b!
	I1216 05:05:13.552569       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c47abbd-d74f-4d30-b214-42c151476c8d", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-545075_7e8e2828-29d0-45ba-8fe3-878e82b9743b became leader
	I1216 05:05:13.652832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-545075_7e8e2828-29d0-45ba-8fe3-878e82b9743b!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-545075 -n old-k8s-version-545075
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-545075 -n old-k8s-version-545075: exit status 2 (328.268522ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-545075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.56s)
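
The kubelet excerpt above shows dashboard-metrics-scraper stuck in CrashLoopBackOff with a growing back-off (10s, then 20s). A minimal sketch for digging further when reproducing locally, assuming the kubectl context name old-k8s-version-545075 from this run (the pod hash suffix -5f989dc9cf-n2t2l is run-specific, so the deployment reference is used instead):

	# fetch output of the last crashed container behind the deployment
	kubectl --context old-k8s-version-545075 -n kubernetes-dashboard logs deploy/dashboard-metrics-scraper --previous
	# show restart counts and pod status for the dashboard pods
	kubectl --context old-k8s-version-545075 -n kubernetes-dashboard get pods -o wide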

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-990631 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-990631 --alsologtostderr -v=1: exit status 80 (1.656144232s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-990631 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:05:49.927600  297985 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:05:49.927740  297985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:49.927756  297985 out.go:374] Setting ErrFile to fd 2...
	I1216 05:05:49.927764  297985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:49.928164  297985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:05:49.928529  297985 out.go:368] Setting JSON to false
	I1216 05:05:49.928553  297985 mustload.go:66] Loading cluster: no-preload-990631
	I1216 05:05:49.929104  297985 config.go:182] Loaded profile config "no-preload-990631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:49.929507  297985 cli_runner.go:164] Run: docker container inspect no-preload-990631 --format={{.State.Status}}
	I1216 05:05:49.950107  297985 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:05:49.950473  297985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:50.028584  297985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-16 05:05:50.018135212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:50.029549  297985 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-990631 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 05:05:50.031596  297985 out.go:179] * Pausing node no-preload-990631 ... 
	I1216 05:05:50.032889  297985 host.go:66] Checking if "no-preload-990631" exists ...
	I1216 05:05:50.033211  297985 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:50.033276  297985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-990631
	I1216 05:05:50.052747  297985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/no-preload-990631/id_rsa Username:docker}
	I1216 05:05:50.146075  297985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:50.158885  297985 pause.go:52] kubelet running: true
	I1216 05:05:50.158955  297985 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:50.346724  297985 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:50.346819  297985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:50.430864  297985 cri.go:89] found id: "75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b"
	I1216 05:05:50.430920  297985 cri.go:89] found id: "190d84165a587af2c099887138e64ae3db4eef4492add527746874d69791a022"
	I1216 05:05:50.430927  297985 cri.go:89] found id: "e3911104d1e183d624a37bd5240f1cb4daad3ed819947b773c46def5220cfd0d"
	I1216 05:05:50.430931  297985 cri.go:89] found id: "9c81e85ff9b8473632b2161e2625db403f46ed30b89331fc578fdc16bf337529"
	I1216 05:05:50.430935  297985 cri.go:89] found id: "9a90b9a88d9e59d0d5baa394070a6cb12b35ea8f5a34889d1d7055ad38ce3468"
	I1216 05:05:50.430939  297985 cri.go:89] found id: "4b4c73e54f123802f62a7af7b9af9fe41679d8d51fd01e105ca758edf669a4a5"
	I1216 05:05:50.430944  297985 cri.go:89] found id: "c7bb8494fe95f58ef0c8c0f1c32dd0af7c7cc7b113eb924e8c2854b0b292a637"
	I1216 05:05:50.430949  297985 cri.go:89] found id: "3332f5674ea8590f97c6776b7b666398f75a26d82a6f36588091b67934df1dd5"
	I1216 05:05:50.430953  297985 cri.go:89] found id: "f9cec7fd839f6e0c5273a0f3e839f84aa3e28f3ba21d352e705a6f3c17d87bd9"
	I1216 05:05:50.430976  297985 cri.go:89] found id: "20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259"
	I1216 05:05:50.430985  297985 cri.go:89] found id: "9fb7cb0f528cf2b4dfef928c1799678a2d9f7afec43110df5511ad9c2f36a8c5"
	I1216 05:05:50.430989  297985 cri.go:89] found id: ""
	I1216 05:05:50.431055  297985 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:50.444968  297985 retry.go:31] will retry after 263.453236ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:50Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:50.708712  297985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:50.726452  297985 pause.go:52] kubelet running: false
	I1216 05:05:50.726544  297985 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:50.885204  297985 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:50.885306  297985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:50.956120  297985 cri.go:89] found id: "75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b"
	I1216 05:05:50.956147  297985 cri.go:89] found id: "190d84165a587af2c099887138e64ae3db4eef4492add527746874d69791a022"
	I1216 05:05:50.956153  297985 cri.go:89] found id: "e3911104d1e183d624a37bd5240f1cb4daad3ed819947b773c46def5220cfd0d"
	I1216 05:05:50.956158  297985 cri.go:89] found id: "9c81e85ff9b8473632b2161e2625db403f46ed30b89331fc578fdc16bf337529"
	I1216 05:05:50.956163  297985 cri.go:89] found id: "9a90b9a88d9e59d0d5baa394070a6cb12b35ea8f5a34889d1d7055ad38ce3468"
	I1216 05:05:50.956168  297985 cri.go:89] found id: "4b4c73e54f123802f62a7af7b9af9fe41679d8d51fd01e105ca758edf669a4a5"
	I1216 05:05:50.956173  297985 cri.go:89] found id: "c7bb8494fe95f58ef0c8c0f1c32dd0af7c7cc7b113eb924e8c2854b0b292a637"
	I1216 05:05:50.956178  297985 cri.go:89] found id: "3332f5674ea8590f97c6776b7b666398f75a26d82a6f36588091b67934df1dd5"
	I1216 05:05:50.956199  297985 cri.go:89] found id: "f9cec7fd839f6e0c5273a0f3e839f84aa3e28f3ba21d352e705a6f3c17d87bd9"
	I1216 05:05:50.956207  297985 cri.go:89] found id: "20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259"
	I1216 05:05:50.956212  297985 cri.go:89] found id: "9fb7cb0f528cf2b4dfef928c1799678a2d9f7afec43110df5511ad9c2f36a8c5"
	I1216 05:05:50.956222  297985 cri.go:89] found id: ""
	I1216 05:05:50.956272  297985 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:50.969888  297985 retry.go:31] will retry after 285.222462ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:50Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:51.256194  297985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:51.270494  297985 pause.go:52] kubelet running: false
	I1216 05:05:51.270547  297985 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:51.419738  297985 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:51.419796  297985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:51.490326  297985 cri.go:89] found id: "75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b"
	I1216 05:05:51.490354  297985 cri.go:89] found id: "190d84165a587af2c099887138e64ae3db4eef4492add527746874d69791a022"
	I1216 05:05:51.490360  297985 cri.go:89] found id: "e3911104d1e183d624a37bd5240f1cb4daad3ed819947b773c46def5220cfd0d"
	I1216 05:05:51.490365  297985 cri.go:89] found id: "9c81e85ff9b8473632b2161e2625db403f46ed30b89331fc578fdc16bf337529"
	I1216 05:05:51.490370  297985 cri.go:89] found id: "9a90b9a88d9e59d0d5baa394070a6cb12b35ea8f5a34889d1d7055ad38ce3468"
	I1216 05:05:51.490375  297985 cri.go:89] found id: "4b4c73e54f123802f62a7af7b9af9fe41679d8d51fd01e105ca758edf669a4a5"
	I1216 05:05:51.490380  297985 cri.go:89] found id: "c7bb8494fe95f58ef0c8c0f1c32dd0af7c7cc7b113eb924e8c2854b0b292a637"
	I1216 05:05:51.490385  297985 cri.go:89] found id: "3332f5674ea8590f97c6776b7b666398f75a26d82a6f36588091b67934df1dd5"
	I1216 05:05:51.490389  297985 cri.go:89] found id: "f9cec7fd839f6e0c5273a0f3e839f84aa3e28f3ba21d352e705a6f3c17d87bd9"
	I1216 05:05:51.490395  297985 cri.go:89] found id: "20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259"
	I1216 05:05:51.490401  297985 cri.go:89] found id: "9fb7cb0f528cf2b4dfef928c1799678a2d9f7afec43110df5511ad9c2f36a8c5"
	I1216 05:05:51.490403  297985 cri.go:89] found id: ""
	I1216 05:05:51.490440  297985 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:51.505274  297985 out.go:203] 
	W1216 05:05:51.506514  297985 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 05:05:51.506536  297985 out.go:285] * 
	* 
	W1216 05:05:51.511596  297985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:05:51.513422  297985 out.go:203] 

                                                
                                                
** /stderr **
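The failure mode is fully visible in the stderr above: the pause path shells out to `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory", even though the preceding `crictl ps` calls do enumerate the crio-managed containers. A minimal Go sketch (not minikube code) that re-runs the same checks on the node, assuming `minikube ssh` works for this profile:

	// repro.go: re-run the commands from the failure above inside the node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(profile string, args ...string) {
		full := append([]string{"-p", profile, "ssh", "--"}, args...)
		out, err := exec.Command("minikube", full...).CombinedOutput()
		fmt.Printf("$ %v -> err=%v\n%s\n", args, err, out)
	}

	func main() {
		profile := "no-preload-990631" // profile name taken from this report
		run(profile, "sudo", "crictl", "ps", "-a", "--quiet") // succeeds: crio still answers over CRI
		run(profile, "sudo", "runc", "list", "-f", "json")    // fails: open /run/runc: no such file or directory
		run(profile, "ls", "-d", "/run/runc")                 // confirms the state directory is absent
	}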
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-990631 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-990631
helpers_test.go:244: (dbg) docker inspect no-preload-990631:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1",
	        "Created": "2025-12-16T05:03:37.47946154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:04:52.509131394Z",
	            "FinishedAt": "2025-12-16T05:04:51.501677195Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/hosts",
	        "LogPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1-json.log",
	        "Name": "/no-preload-990631",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-990631:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-990631",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1",
	                "LowerDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-990631",
	                "Source": "/var/lib/docker/volumes/no-preload-990631/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-990631",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-990631",
	                "name.minikube.sigs.k8s.io": "no-preload-990631",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "970ff16c69ac958cb41d090a6f22c31f0474927c9c40baf7961b33aa77fb3342",
	            "SandboxKey": "/var/run/docker/netns/970ff16c69ac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-990631": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a201df1b3d2f7b8b13672d056e69c36960ed0846b69e35e8077f1fc3c7bcec36",
	                    "EndpointID": "f836912d023449691945eabde441c3677e9ba6d8a1478fb2157fed55ea5988d9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:6f:39:6b:59:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-990631",
	                        "ac137694b742"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
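The full dump above is what the harness collects, but the handful of fields the post-mortem actually keys on can be read directly with docker's --format queries — the same kind the minikube log below shows it running (e.g. --format={{.State.Status}}). A small sketch, assuming only the docker CLI and the container name from this report:

	// inspectfields.go: pull individual fields from `docker container inspect`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func field(name, format string) string {
		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		if err != nil {
			return "error: " + err.Error()
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		name := "no-preload-990631"
		fmt.Println("status:", field(name, "{{.State.Status}}")) // "running" per the JSON above
		fmt.Println("paused:", field(name, "{{.State.Paused}}")) // "false": docker-level pause was never reached
		fmt.Println("ssh port:", field(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)) // 33086
	}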
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-990631 -n no-preload-990631
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-990631 -n no-preload-990631: exit status 2 (348.676353ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-990631 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-990631 logs -n 25: (1.478922636s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-545075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p kubernetes-upgrade-132399                                                                                                                                                                                                                         │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ delete  │ -p disable-driver-mounts-148966                                                                                                                                                                                                                      │ disable-driver-mounts-148966 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p no-preload-990631 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p embed-certs-127599 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:05:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:05:28.619499  293223 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:05:28.619651  293223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:28.619666  293223 out.go:374] Setting ErrFile to fd 2...
	I1216 05:05:28.619672  293223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:28.619982  293223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:05:28.620585  293223 out.go:368] Setting JSON to false
	I1216 05:05:28.621902  293223 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2880,"bootTime":1765858649,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:05:28.621960  293223 start.go:143] virtualization: kvm guest
	I1216 05:05:28.624132  293223 out.go:179] * [default-k8s-diff-port-375252] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:05:28.625828  293223 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:05:28.625847  293223 notify.go:221] Checking for updates...
	I1216 05:05:28.628337  293223 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:05:28.629978  293223 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:28.631193  293223 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:05:28.632390  293223 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:05:28.633511  293223 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:05:28.635178  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:28.635713  293223 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:05:28.661922  293223 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:05:28.662051  293223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:28.718273  293223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-16 05:05:28.708319047 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:28.718425  293223 docker.go:319] overlay module found
	I1216 05:05:28.719951  293223 out.go:179] * Using the docker driver based on existing profile
	I1216 05:05:28.721068  293223 start.go:309] selected driver: docker
	I1216 05:05:28.721089  293223 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:28.721191  293223 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:05:28.721731  293223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:28.788963  293223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-16 05:05:28.778467948 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:28.789410  293223 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:28.789448  293223 cni.go:84] Creating CNI manager for ""
	I1216 05:05:28.789523  293223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:28.789569  293223 start.go:353] cluster config:
	{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:28.791508  293223 out.go:179] * Starting "default-k8s-diff-port-375252" primary control-plane node in "default-k8s-diff-port-375252" cluster
	I1216 05:05:28.792954  293223 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:05:28.794079  293223 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	W1216 05:05:25.308136  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:27.308269  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:28.795105  293223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:28.795150  293223 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:05:28.795162  293223 cache.go:65] Caching tarball of preloaded images
	I1216 05:05:28.795205  293223 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:05:28.795272  293223 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:05:28.795286  293223 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:05:28.795400  293223 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:05:28.816986  293223 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:05:28.817006  293223 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:05:28.817050  293223 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:05:28.817103  293223 start.go:360] acquireMachinesLock for default-k8s-diff-port-375252: {Name:mkb8cd554afbe4d541db5e350d6207558dad5cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:05:28.817174  293223 start.go:364] duration metric: took 39.023µs to acquireMachinesLock for "default-k8s-diff-port-375252"
	I1216 05:05:28.817192  293223 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:05:28.817201  293223 fix.go:54] fixHost starting: 
	I1216 05:05:28.817485  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:28.837506  293223 fix.go:112] recreateIfNeeded on default-k8s-diff-port-375252: state=Stopped err=<nil>
	W1216 05:05:28.837533  293223 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:05:25.049152  292503 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:05:25.049357  292503 start.go:159] libmachine.API.Create for "newest-cni-418580" (driver="docker")
	I1216 05:05:25.049412  292503 client.go:173] LocalClient.Create starting
	I1216 05:05:25.049478  292503 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:05:25.049507  292503 main.go:143] libmachine: Decoding PEM data...
	I1216 05:05:25.049530  292503 main.go:143] libmachine: Parsing certificate...
	I1216 05:05:25.049578  292503 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:05:25.049595  292503 main.go:143] libmachine: Decoding PEM data...
	I1216 05:05:25.049605  292503 main.go:143] libmachine: Parsing certificate...
	I1216 05:05:25.049898  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:05:25.066218  292503 cli_runner.go:211] docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:05:25.066280  292503 network_create.go:284] running [docker network inspect newest-cni-418580] to gather additional debugging logs...
	I1216 05:05:25.066296  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580
	W1216 05:05:25.082484  292503 cli_runner.go:211] docker network inspect newest-cni-418580 returned with exit code 1
	I1216 05:05:25.082512  292503 network_create.go:287] error running [docker network inspect newest-cni-418580]: docker network inspect newest-cni-418580: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-418580 not found
	I1216 05:05:25.082540  292503 network_create.go:289] output of [docker network inspect newest-cni-418580]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-418580 not found
	
	** /stderr **
	I1216 05:05:25.082645  292503 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:25.099947  292503 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:05:25.100678  292503 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:05:25.101416  292503 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:05:25.102201  292503 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b4cd001204df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:4c:f5:dc:0a:90} reservation:<nil>}
	I1216 05:05:25.103269  292503 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f240c0}
	I1216 05:05:25.103298  292503 network_create.go:124] attempt to create docker network newest-cni-418580 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 05:05:25.103367  292503 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-418580 newest-cni-418580
	I1216 05:05:25.151692  292503 network_create.go:108] docker network newest-cni-418580 192.168.85.0/24 created
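The subnet selection above follows a visible pattern: candidates advance by 9 in the third octet (192.168.49.0/24, .58, .67, .76), and the first one not claimed by an existing bridge, 192.168.85.0/24, is used. A sketch of that scan as read off these log lines (the step and starting octet are inferred from the log, not taken from minikube's source; isTaken is a hypothetical stand-in for the host-interface check in network.go):

	// subnetscan.go: the free-subnet walk visible in the network.go lines above.
	package main

	import "fmt"

	// isTaken is a stand-in: the log marks these four subnets as taken
	// by existing docker bridges on this host.
	func isTaken(subnet string) bool {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
		}
		return taken[subnet]
	}

	func main() {
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if isTaken(subnet) {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet) // prints 192.168.85.0/24
			return
		}
	}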
	I1216 05:05:25.151725  292503 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-418580" container
	I1216 05:05:25.151806  292503 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:05:25.168787  292503 cli_runner.go:164] Run: docker volume create newest-cni-418580 --label name.minikube.sigs.k8s.io=newest-cni-418580 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:05:25.186120  292503 oci.go:103] Successfully created a docker volume newest-cni-418580
	I1216 05:05:25.186217  292503 cli_runner.go:164] Run: docker run --rm --name newest-cni-418580-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-418580 --entrypoint /usr/bin/test -v newest-cni-418580:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:05:25.583110  292503 oci.go:107] Successfully prepared a docker volume newest-cni-418580
	I1216 05:05:25.583174  292503 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:05:25.583183  292503 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:05:25.583245  292503 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-418580:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 05:05:28.338637  292503 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-418580:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (2.755349769s)
	I1216 05:05:28.338666  292503 kic.go:203] duration metric: took 2.755479724s to extract preloaded images to volume ...
	W1216 05:05:28.338763  292503 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 05:05:28.338816  292503 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 05:05:28.338866  292503 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 05:05:28.403669  292503 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-418580 --name newest-cni-418580 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-418580 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-418580 --network newest-cni-418580 --ip 192.168.85.2 --volume newest-cni-418580:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 05:05:28.722054  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Running}}
	I1216 05:05:28.742885  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.766670  292503 cli_runner.go:164] Run: docker exec newest-cni-418580 stat /var/lib/dpkg/alternatives/iptables
	I1216 05:05:28.817325  292503 oci.go:144] the created container "newest-cni-418580" has a running status.
	I1216 05:05:28.817353  292503 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa...
	I1216 05:05:28.858625  292503 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 05:05:28.890286  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.914645  292503 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 05:05:28.914674  292503 kic_runner.go:114] Args: [docker exec --privileged newest-cni-418580 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 05:05:28.970557  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.995784  292503 machine.go:94] provisionDockerMachine start ...
	I1216 05:05:28.995879  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:29.018594  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:29.018845  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:29.018853  292503 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:05:29.019705  292503 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49150->127.0.0.1:33097: read: connection reset by peer
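The handshake failure above is the expected first-contact race: the container came up milliseconds earlier and sshd is not yet accepting connections, so the provisioner retries — the sibling profile's log below shows the same dial error at 05:05:29 followed by a successful SSH command at 05:05:32. A minimal dial-with-retry sketch (net.DialTimeout stands in for the SSH handshake; the port is the forwarded 22/tcp from this log):

	// dialretry.go: wait for a freshly started container's SSH port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "127.0.0.1:33097" // forwarded SSH port from the log line above
		for attempt := 1; attempt <= 20; attempt++ {
			c, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				c.Close()
				fmt.Println("port accepting connections after", attempt, "attempt(s)")
				return
			}
			fmt.Println("attempt", attempt, "failed:", err) // e.g. connection reset by peer
			time.Sleep(500 * time.Millisecond)              // back off before the next try
		}
	}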
	W1216 05:05:29.106433  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:31.606893  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:28.839266  293223 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-375252" ...
	I1216 05:05:28.839325  293223 cli_runner.go:164] Run: docker start default-k8s-diff-port-375252
	I1216 05:05:29.113119  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:29.132662  293223 kic.go:430] container "default-k8s-diff-port-375252" state is running.
	I1216 05:05:29.133094  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:29.152639  293223 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:05:29.152879  293223 machine.go:94] provisionDockerMachine start ...
	I1216 05:05:29.152970  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:29.171277  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:29.171536  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:29.171549  293223 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:05:29.172205  293223 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51300->127.0.0.1:33103: read: connection reset by peer
	I1216 05:05:32.297657  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:05:32.297684  293223 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-375252"
	I1216 05:05:32.297751  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.318798  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.319051  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.319069  293223 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-375252 && echo "default-k8s-diff-port-375252" | sudo tee /etc/hostname
	I1216 05:05:32.456847  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:05:32.456922  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.476706  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.477045  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.477072  293223 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-375252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-375252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-375252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:32.606389  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:32.606417  293223 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:32.606439  293223 ubuntu.go:190] setting up certificates
	I1216 05:05:32.606452  293223 provision.go:84] configureAuth start
	I1216 05:05:32.606505  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:32.627629  293223 provision.go:143] copyHostCerts
	I1216 05:05:32.627685  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:32.627701  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:32.627749  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:32.627836  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:32.627845  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:32.627865  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:32.627913  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:32.627921  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:32.627939  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:32.627997  293223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-375252 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-375252 localhost minikube]
	I1216 05:05:32.730496  293223 provision.go:177] copyRemoteCerts
	I1216 05:05:32.730565  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:32.730600  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.751146  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:32.843812  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:32.861358  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 05:05:32.879619  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:05:32.898030  293223 provision.go:87] duration metric: took 291.544808ms to configureAuth
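Note: the server certificate generated and copied above embeds the SANs listed in the log (127.0.0.1, the container IP, the profile name, localhost, minikube). A minimal sketch for inspecting them on the node, assuming OpenSSL 1.1.1+ for the -ext option:

    openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName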
	I1216 05:05:32.898063  293223 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:32.898285  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:32.898408  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.918526  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.918833  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.918878  293223 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:33.236179  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:33.236212  293223 machine.go:97] duration metric: took 4.083315908s to provisionDockerMachine
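Note: the provisioning step above drops a CRIO_MINIKUBE_OPTIONS override into /etc/sysconfig/crio.minikube, marking the service CIDR 10.96.0.0/12 as an insecure registry, and restarts CRI-O. A quick sketch for verifying the result on the node (expected output in comments is illustrative):

    cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio      # expected: active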
	I1216 05:05:33.236229  293223 start.go:293] postStartSetup for "default-k8s-diff-port-375252" (driver="docker")
	I1216 05:05:33.236246  293223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:33.236317  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:33.236391  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.255124  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.349976  293223 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:33.353576  293223 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:33.353609  293223 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:33.353622  293223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:33.353677  293223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:33.353784  293223 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:33.353908  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:33.361552  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:33.379312  293223 start.go:296] duration metric: took 143.06602ms for postStartSetup
	I1216 05:05:33.379398  293223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:33.379447  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.400517  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.491075  293223 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:33.495538  293223 fix.go:56] duration metric: took 4.678331684s for fixHost
	I1216 05:05:33.495565  293223 start.go:83] releasing machines lock for "default-k8s-diff-port-375252", held for 4.678379719s
	I1216 05:05:33.495640  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:33.516725  293223 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:33.516805  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.516816  293223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:33.516895  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.536965  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.537322  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	W1216 05:05:29.308470  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:31.807551  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:32.147295  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-418580
	
	I1216 05:05:32.147323  292503 ubuntu.go:182] provisioning hostname "newest-cni-418580"
	I1216 05:05:32.147387  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.165073  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.165297  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.165310  292503 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-418580 && echo "newest-cni-418580" | sudo tee /etc/hostname
	I1216 05:05:32.302827  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-418580
	
	I1216 05:05:32.302918  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.322729  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.323008  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.323056  292503 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-418580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-418580/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-418580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:32.450471  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:32.450502  292503 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:32.450536  292503 ubuntu.go:190] setting up certificates
	I1216 05:05:32.450556  292503 provision.go:84] configureAuth start
	I1216 05:05:32.450612  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:32.471395  292503 provision.go:143] copyHostCerts
	I1216 05:05:32.471462  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:32.471473  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:32.471555  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:32.471711  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:32.471728  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:32.471774  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:32.471881  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:32.471892  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:32.471939  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:32.472109  292503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.newest-cni-418580 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-418580]
	I1216 05:05:32.499918  292503 provision.go:177] copyRemoteCerts
	I1216 05:05:32.499967  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:32.499999  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.518449  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:32.614228  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:32.635678  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:05:32.653888  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:05:32.672293  292503 provision.go:87] duration metric: took 221.714395ms to configureAuth
	I1216 05:05:32.672321  292503 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:32.672501  292503 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:32.672618  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.693226  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.693545  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.693567  292503 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:32.968063  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:32.968093  292503 machine.go:97] duration metric: took 3.972285388s to provisionDockerMachine
	I1216 05:05:32.968105  292503 client.go:176] duration metric: took 7.918682906s to LocalClient.Create
	I1216 05:05:32.968125  292503 start.go:167] duration metric: took 7.918767525s to libmachine.API.Create "newest-cni-418580"
	I1216 05:05:32.968137  292503 start.go:293] postStartSetup for "newest-cni-418580" (driver="docker")
	I1216 05:05:32.968155  292503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:32.968222  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:32.968275  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.987417  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.081927  292503 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:33.086603  292503 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:33.086638  292503 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:33.086652  292503 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:33.086706  292503 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:33.086801  292503 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:33.086920  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:33.095668  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:33.117594  292503 start.go:296] duration metric: took 149.443086ms for postStartSetup
	I1216 05:05:33.117910  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:33.136597  292503 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/config.json ...
	I1216 05:05:33.136935  292503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:33.136989  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.157818  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.249294  292503 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:33.254260  292503 start.go:128] duration metric: took 8.207842149s to createHost
	I1216 05:05:33.254284  292503 start.go:83] releasing machines lock for "newest-cni-418580", held for 8.207977773s
	I1216 05:05:33.254363  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:33.272954  292503 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:33.273028  292503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:33.273042  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.273100  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.291480  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.293050  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.382769  292503 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:33.442424  292503 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:33.477702  292503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:33.482676  292503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:33.482740  292503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:33.509906  292503 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
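Note: the find command above disables any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube installs (kindnet, per the log further down) stays active. A quoted, copy-pasteable equivalent of that invocation (a sketch, not minikube's literal command line):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;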
	I1216 05:05:33.509941  292503 start.go:496] detecting cgroup driver to use...
	I1216 05:05:33.509974  292503 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:33.510045  292503 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:33.529423  292503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:33.545150  292503 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:33.545215  292503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:33.563032  292503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:33.581565  292503 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:33.666314  292503 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:33.758723  292503 docker.go:234] disabling docker service ...
	I1216 05:05:33.758784  292503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:33.777100  292503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:33.790420  292503 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:33.893910  292503 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:33.985401  292503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:33.997878  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:34.012256  292503 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:34.012314  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.022649  292503 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:34.022700  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.031562  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.042658  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.053710  292503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:34.062165  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.070697  292503 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.083923  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
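Note: the sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to systemd, puts conmon in the pod cgroup, and opens unprivileged low ports via default_sysctls. A grep to confirm the rewritten keys (the expected matches shown as comments are illustrative):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",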
	I1216 05:05:34.092579  292503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:34.099986  292503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:34.107536  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.189571  292503 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:05:34.334382  292503 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:34.334464  292503 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:34.338658  292503 start.go:564] Will wait 60s for crictl version
	I1216 05:05:34.338717  292503 ssh_runner.go:195] Run: which crictl
	I1216 05:05:34.342328  292503 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:34.366604  292503 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:05:34.366695  292503 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.396642  292503 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.433283  292503 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 05:05:34.435360  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:34.453408  292503 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:05:34.457897  292503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
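Note: the one-liner above is an idempotent /etc/hosts update: filter out any existing host.minikube.internal entry, append the current mapping, write to a temp file, then copy it back with sudo (a plain redirect would fail since /etc/hosts is root-owned). A generalized sketch, with NAME and IP as placeholders:

    NAME=host.minikube.internal IP=192.168.85.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts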
	I1216 05:05:34.470954  292503 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 05:05:33.681706  293223 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:33.688766  293223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:33.730139  293223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:33.735582  293223 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:33.735658  293223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:33.743577  293223 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:05:33.743600  293223 start.go:496] detecting cgroup driver to use...
	I1216 05:05:33.743629  293223 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:33.743661  293223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:33.757282  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:33.769587  293223 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:33.769641  293223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:33.785195  293223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:33.797271  293223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:33.890845  293223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:33.981546  293223 docker.go:234] disabling docker service ...
	I1216 05:05:33.981625  293223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:33.997128  293223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:34.009761  293223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:34.091773  293223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:34.181978  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:34.195138  293223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:34.209731  293223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:34.209779  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.218732  293223 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:34.218793  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.227814  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.236762  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.246292  293223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:34.254357  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.262928  293223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.271401  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.280072  293223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:34.288086  293223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:34.295663  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.380378  293223 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:05:34.524358  293223 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:34.524440  293223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:34.528707  293223 start.go:564] Will wait 60s for crictl version
	I1216 05:05:34.528766  293223 ssh_runner.go:195] Run: which crictl
	I1216 05:05:34.532850  293223 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:34.559122  293223 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:05:34.559232  293223 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.588698  293223 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.622232  293223 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:05:34.472170  292503 kubeadm.go:884] updating cluster {Name:newest-cni-418580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:05:34.472345  292503 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:05:34.472410  292503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.506214  292503 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.506240  292503 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:05:34.506293  292503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.533288  292503 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.533310  292503 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:05:34.533332  292503 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 05:05:34.533435  292503 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-418580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
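Note: the unit fragment above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); the empty ExecStart= line first clears the packaged command so the versioned kubelet invocation fully replaces it. To view the merged unit on the node, a sketch:

    systemctl cat kubelet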
	I1216 05:05:34.533515  292503 ssh_runner.go:195] Run: crio config
	I1216 05:05:34.582963  292503 cni.go:84] Creating CNI manager for ""
	I1216 05:05:34.582990  292503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:34.583009  292503 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 05:05:34.583050  292503 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-418580 NodeName:newest-cni-418580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:05:34.583227  292503 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-418580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
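Note: this rendered manifest is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later consumed by kubeadm. One hedged way to sanity-check such a file by hand, assuming kubeadm v1.26+ which ships the validate subcommand:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new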
	I1216 05:05:34.583301  292503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:05:34.592247  292503 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:05:34.592309  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:05:34.600451  292503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 05:05:34.615740  292503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:05:34.631629  292503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 05:05:34.645966  292503 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:05:34.649983  292503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.660983  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.754997  292503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:34.777628  292503 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580 for IP: 192.168.85.2
	I1216 05:05:34.777642  292503 certs.go:195] generating shared ca certs ...
	I1216 05:05:34.777658  292503 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.777857  292503 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:05:34.777914  292503 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:05:34.777931  292503 certs.go:257] generating profile certs ...
	I1216 05:05:34.778003  292503 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key
	I1216 05:05:34.778143  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt with IP's: []
	I1216 05:05:34.623929  293223 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:34.642569  293223 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 05:05:34.646797  293223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.657490  293223 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:05:34.657635  293223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:34.657700  293223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.694359  293223 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.694383  293223 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:05:34.694449  293223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.722765  293223 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.722791  293223 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:05:34.722798  293223 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1216 05:05:34.722885  293223 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-375252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:05:34.722943  293223 ssh_runner.go:195] Run: crio config
	I1216 05:05:34.768295  293223 cni.go:84] Creating CNI manager for ""
	I1216 05:05:34.768319  293223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:34.768330  293223 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:05:34.768351  293223 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-375252 NodeName:default-k8s-diff-port-375252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:05:34.768461  293223 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-375252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:05:34.768522  293223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:05:34.776980  293223 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:05:34.777057  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:05:34.784977  293223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1216 05:05:34.799152  293223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:05:34.814226  293223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1216 05:05:34.829713  293223 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:05:34.833416  293223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.843862  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.923578  293223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:34.953581  293223 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252 for IP: 192.168.76.2
	I1216 05:05:34.953601  293223 certs.go:195] generating shared ca certs ...
	I1216 05:05:34.953616  293223 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.953767  293223 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:05:34.953838  293223 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:05:34.953859  293223 certs.go:257] generating profile certs ...
	I1216 05:05:34.953970  293223 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key
	I1216 05:05:34.954080  293223 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f
	I1216 05:05:34.954137  293223 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key
	I1216 05:05:34.954290  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:05:34.954338  293223 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:05:34.954352  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:05:34.954447  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:05:34.954496  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:05:34.954531  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:05:34.954590  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:34.955229  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:05:34.975322  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:05:34.994253  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:05:35.014878  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:05:35.040118  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 05:05:35.059441  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:05:35.076865  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:05:35.094221  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 05:05:35.112726  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:05:35.130358  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:05:35.148722  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:05:35.166552  293223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:05:35.179253  293223 ssh_runner.go:195] Run: openssl version
	I1216 05:05:35.185464  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.194548  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:05:35.202787  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.207145  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.207204  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.243264  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:05:35.250997  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.258211  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:05:35.265357  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.269279  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.269321  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.304617  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:05:35.312873  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.320543  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:05:35.328266  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.331987  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.332071  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.369465  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
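The block above installs minikube's CA material into the node's system trust store: each PEM is copied to /usr/share/ca-certificates, hashed, and then expected to appear under /etc/ssl/certs as a symlink named after its OpenSSL subject hash (b5213941.0 for minikubeCA.pem here), which is how TLS clients scanning the hashed directory locate it. A minimal standalone sketch of the same steps, with paths illustrative rather than taken from this run:

  # compute the subject hash OpenSSL uses for trust-store lookups
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  # link the cert into /etc/ssl/certs, then by hash (the ".0" suffix disambiguates collisions)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"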
	I1216 05:05:35.377085  293223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:05:35.380947  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:05:35.422121  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:05:35.462335  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:05:35.508430  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:05:35.559688  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:05:35.615416  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
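The six -checkend probes above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (one day) from now; a non-zero exit means the cert expires inside that window. The check can be reproduced by hand as follows (the cert path is illustrative, not taken from this run):

  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
    echo "cert valid for at least 24h"
  else
    echo "cert expires within 24h (or is already expired)"
  fi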
	I1216 05:05:35.663433  293223 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:35.663546  293223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:05:35.663626  293223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:05:35.697125  293223 cri.go:89] found id: "3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45"
	I1216 05:05:35.697150  293223 cri.go:89] found id: "fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3"
	I1216 05:05:35.697156  293223 cri.go:89] found id: "bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff"
	I1216 05:05:35.697169  293223 cri.go:89] found id: "5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01"
	I1216 05:05:35.697174  293223 cri.go:89] found id: ""
	I1216 05:05:35.697297  293223 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:05:35.710419  293223 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:35Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:35.710548  293223 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:05:35.718597  293223 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:05:35.718613  293223 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:05:35.718655  293223 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:05:35.727178  293223 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:05:35.728054  293223 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-375252" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:35.728609  293223 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-375252" cluster setting kubeconfig missing "default-k8s-diff-port-375252" context setting]
	I1216 05:05:35.729803  293223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.732044  293223 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:05:35.741439  293223 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 05:05:35.741468  293223 kubeadm.go:602] duration metric: took 22.850213ms to restartPrimaryControlPlane
	I1216 05:05:35.741479  293223 kubeadm.go:403] duration metric: took 78.055706ms to StartCluster
	I1216 05:05:35.741495  293223 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.741561  293223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:35.743663  293223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.743920  293223 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:35.744192  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:35.744255  293223 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:05:35.744337  293223 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744353  293223 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.744361  293223 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:05:35.744385  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.744837  293223 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744857  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.744859  293223 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744867  293223 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.744878  293223 addons.go:248] addon dashboard should already be in state true
	I1216 05:05:35.744882  293223 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-375252"
	I1216 05:05:35.744905  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.745234  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.745413  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.746307  293223 out.go:179] * Verifying Kubernetes components...
	I1216 05:05:35.747617  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:35.779083  293223 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:05:35.779717  293223 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.779773  293223 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:05:35.779803  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.780704  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.782924  293223 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 05:05:35.782935  293223 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:05:34.912976  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt ...
	I1216 05:05:34.913002  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt: {Name:mkf026a43bf34828560cdd861dc967a4da3a0a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.913190  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key ...
	I1216 05:05:34.913205  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key: {Name:mk9f0f25e1ac61a51fd17fa02f026219d2b811d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.913291  292503 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e
	I1216 05:05:34.913325  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1216 05:05:34.970132  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e ...
	I1216 05:05:34.970158  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e: {Name:mk2d44e3904e6a658ff0ee37692a05dfb5d718cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.970338  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e ...
	I1216 05:05:34.970360  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e: {Name:mkca5d8ab902ee32a3562f650b1165f4e10184d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.970489  292503 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt
	I1216 05:05:34.970612  292503 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key
	I1216 05:05:34.970698  292503 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key
	I1216 05:05:34.970726  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt with IP's: []
	I1216 05:05:35.230717  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt ...
	I1216 05:05:35.230762  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt: {Name:mk2bf2e2cf7c916f101948379e3dbb6fdb61a8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.230936  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key ...
	I1216 05:05:35.230951  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key: {Name:mkf0aafa828bb13a430c43eed8a9d89f985fe446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.231223  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:05:35.231284  292503 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:05:35.231307  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:05:35.231346  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:05:35.231400  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:05:35.231441  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:05:35.231498  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:35.232006  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:05:35.250349  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:05:35.267912  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:05:35.285259  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:05:35.302305  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:05:35.320455  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:05:35.338139  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:05:35.355217  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:05:35.372674  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:05:35.396550  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:05:35.414853  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:05:35.433117  292503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:05:35.445803  292503 ssh_runner.go:195] Run: openssl version
	I1216 05:05:35.452761  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.462981  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:05:35.472831  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.478677  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.478741  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.527788  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:05:35.538626  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:05:35.549759  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.561348  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:05:35.569778  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.574227  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.574289  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.628396  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:05:35.639703  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:05:35.650632  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.659686  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:05:35.669120  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.673306  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.673366  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.722247  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:05:35.731511  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
	I1216 05:05:35.740868  292503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:05:35.745336  292503 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
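The failed stat is how minikube distinguishes a first start from a restart: when /var/lib/minikube/certs/apiserver-kubelet-client.crt is absent the node has never been bootstrapped, so a full `kubeadm init` runs instead of the restartPrimaryControlPlane path taken by the default-k8s-diff-port-375252 process above. Expressed as a shell probe:

  # the exit status of stat selects the bootstrap path
  if sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
    echo "certs present: attempt cluster restart"
  else
    echo "no client cert: first start, run kubeadm init"
  fi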
	I1216 05:05:35.745401  292503 kubeadm.go:401] StartCluster: {Name:newest-cni-418580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:35.745487  292503 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:05:35.746058  292503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:05:35.801678  292503 cri.go:89] found id: ""
	I1216 05:05:35.801751  292503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:05:35.816616  292503 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:05:35.828849  292503 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:05:35.828906  292503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:05:35.849259  292503 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:05:35.849285  292503 kubeadm.go:158] found existing configuration files:
	
	I1216 05:05:35.849332  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:05:35.861581  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:05:35.861651  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:05:35.872359  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:05:35.882005  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:05:35.882094  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:05:35.892470  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:05:35.903276  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:05:35.903332  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:05:35.910839  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:05:35.918764  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:05:35.918817  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
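The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig that kubeadm writes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so `kubeadm init` can regenerate it. On this fresh node every grep exits 2 because the files do not exist, so the rm calls are no-ops. The loop condenses to the following (endpoint and file names mirror the log):

  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # keep the file only if it already points at the expected endpoint
    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done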
	I1216 05:05:35.927327  292503 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:05:35.983497  292503 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:05:35.983639  292503 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:05:36.078326  292503 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:05:36.078423  292503 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:05:36.078478  292503 kubeadm.go:319] OS: Linux
	I1216 05:05:36.078538  292503 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:05:36.078632  292503 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:05:36.078725  292503 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:05:36.078801  292503 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:05:36.078868  292503 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:05:36.078935  292503 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:05:36.079014  292503 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:05:36.079098  292503 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:05:36.155571  292503 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:05:36.155718  292503 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:05:36.155845  292503 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:05:36.166440  292503 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1216 05:05:34.106689  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:36.109040  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:36.607281  284202 pod_ready.go:94] pod "coredns-7d764666f9-7l2ds" is "Ready"
	I1216 05:05:36.607311  284202 pod_ready.go:86] duration metric: took 34.506370037s for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.609962  284202 pod_ready.go:83] waiting for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.613794  284202 pod_ready.go:94] pod "etcd-no-preload-990631" is "Ready"
	I1216 05:05:36.613816  284202 pod_ready.go:86] duration metric: took 3.832267ms for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.615814  284202 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.619531  284202 pod_ready.go:94] pod "kube-apiserver-no-preload-990631" is "Ready"
	I1216 05:05:36.619551  284202 pod_ready.go:86] duration metric: took 3.7182ms for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.621267  284202 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.805520  284202 pod_ready.go:94] pod "kube-controller-manager-no-preload-990631" is "Ready"
	I1216 05:05:36.805552  284202 pod_ready.go:86] duration metric: took 184.26649ms for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.004943  284202 pod_ready.go:83] waiting for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.405104  284202 pod_ready.go:94] pod "kube-proxy-2bjbr" is "Ready"
	I1216 05:05:37.405148  284202 pod_ready.go:86] duration metric: took 400.175347ms for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.605530  284202 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:38.005242  284202 pod_ready.go:94] pod "kube-scheduler-no-preload-990631" is "Ready"
	I1216 05:05:38.005269  284202 pod_ready.go:86] duration metric: took 399.71845ms for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:38.005280  284202 pod_ready.go:40] duration metric: took 35.907405331s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:38.078833  284202 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:05:38.083862  284202 out.go:179] * Done! kubectl is now configured to use "no-preload-990631" cluster and "default" namespace by default
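The pod_ready loop that precedes this (34.5s for coredns, milliseconds for the already-ready control-plane pods) polls the API until every labelled kube-system pod reports Ready. Roughly the same gate can be expressed with plain kubectl, which blocks until the condition holds or the timeout fires (context name taken from the log):

  kubectl --context no-preload-990631 -n kube-system \
    wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m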
	I1216 05:05:36.168529  292503 out.go:252]   - Generating certificates and keys ...
	I1216 05:05:36.168629  292503 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:05:36.168732  292503 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:05:36.271180  292503 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:05:36.355639  292503 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 05:05:36.514877  292503 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 05:05:36.614071  292503 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 05:05:36.827128  292503 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 05:05:36.827351  292503 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-418580] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:05:36.974476  292503 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 05:05:36.974620  292503 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-418580] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:05:37.026221  292503 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 05:05:37.227397  292503 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 05:05:37.493691  292503 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 05:05:37.493853  292503 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:05:37.662278  292503 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:05:37.748294  292503 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:05:37.901537  292503 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:05:38.083466  292503 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:05:38.279485  292503 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:05:38.280369  292503 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:05:38.285287  292503 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:05:35.784693  293223 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:35.784717  293223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:05:35.784735  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:05:35.784752  293223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:05:35.784789  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.784814  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.819593  293223 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:35.819612  293223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:05:35.819664  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.819933  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.823853  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.844792  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.921834  293223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:35.938442  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:35.941483  293223 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-375252" to be "Ready" ...
	I1216 05:05:35.950146  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:05:35.950165  293223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:05:35.959892  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:35.969036  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:05:35.969063  293223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:05:35.989078  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:05:35.989103  293223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:05:36.011127  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:05:36.011150  293223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:05:36.031446  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:05:36.031478  293223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:05:36.045414  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:05:36.045444  293223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:05:36.064565  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:05:36.064589  293223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:05:36.081324  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:05:36.081348  293223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:05:36.095008  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:36.095044  293223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:05:36.112771  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:38.170264  293223 node_ready.go:49] node "default-k8s-diff-port-375252" is "Ready"
	I1216 05:05:38.170298  293223 node_ready.go:38] duration metric: took 2.228781477s for node "default-k8s-diff-port-375252" to be "Ready" ...
	I1216 05:05:38.170312  293223 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:38.170360  293223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:38.776267  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.837791808s)
	I1216 05:05:38.776306  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.816384453s)
	I1216 05:05:38.776428  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.663623652s)
	I1216 05:05:38.776458  293223 api_server.go:72] duration metric: took 3.032511707s to wait for apiserver process to appear ...
	I1216 05:05:38.776471  293223 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:38.776496  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:38.778346  293223 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-375252 addons enable metrics-server
	
	I1216 05:05:38.781799  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:38.781820  293223 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
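The 500s above come from /healthz aggregating the apiserver's post-start hooks: the two [-] entries (RBAC bootstrap roles and the system priority classes) have not finished, so the aggregate check fails even though etcd and the API itself are up. Polling the endpoint directly shows the same 500-to-200 transition that the log records a second later (-k skips TLS verification, much as minikube's checker trusts its own CA):

  # poll until the aggregate health check returns the bare "ok" body
  until curl -ks https://192.168.76.2:8444/healthz | grep -qx ok; do
    sleep 1
  done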
	I1216 05:05:38.784278  293223 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1216 05:05:33.808006  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:35.812587  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:38.308806  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:38.287293  292503 out.go:252]   - Booting up control plane ...
	I1216 05:05:38.287409  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:05:38.287505  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:05:38.287591  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:05:38.306562  292503 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:05:38.306822  292503 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:05:38.315641  292503 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:05:38.317230  292503 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:05:38.317321  292503 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:05:38.439332  292503 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:05:38.439522  292503 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:05:38.940884  292503 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.794985ms
	I1216 05:05:38.943858  292503 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 05:05:38.943984  292503 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 05:05:38.944138  292503 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 05:05:38.944228  292503 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 05:05:39.947945  292503 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004000869s
	I1216 05:05:41.136529  292503 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.192438136s
	I1216 05:05:42.945750  292503 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001716849s
	I1216 05:05:42.965946  292503 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 05:05:42.978062  292503 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 05:05:42.988785  292503 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 05:05:42.989107  292503 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-418580 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 05:05:42.998633  292503 kubeadm.go:319] [bootstrap-token] Using token: w2w8nj.tpu6zggrwl1bblxh
	I1216 05:05:38.785917  293223 addons.go:530] duration metric: took 3.04166463s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1216 05:05:39.277101  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:39.282770  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:39.282808  293223 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:39.777190  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:39.781241  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1216 05:05:39.782224  293223 api_server.go:141] control plane version: v1.34.2
	I1216 05:05:39.782245  293223 api_server.go:131] duration metric: took 1.005765809s to wait for apiserver health ...
	I1216 05:05:39.782254  293223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:39.785805  293223 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:39.785846  293223 system_pods.go:61] "coredns-66bc5c9577-65x5z" [8c610230-fb3c-4bef-a6d3-8870276ed93e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:39.785855  293223 system_pods.go:61] "etcd-default-k8s-diff-port-375252" [c04dc1e6-e72a-4915-8ed7-4976b69962c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:39.785870  293223 system_pods.go:61] "kindnet-xml94" [2f0ff435-e0ff-441e-bdf8-0cf9a6da784c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:39.785882  293223 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-375252" [51755996-a6d5-4427-8ecf-d5635d5bae94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:39.785897  293223 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-375252" [01874cfa-681a-48b1-8378-2f33c21185ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:39.785908  293223 system_pods.go:61] "kube-proxy-dmw9t" [c72d3d28-eaa9-4a35-b7b9-5207edf40372] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:39.785916  293223 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-375252" [e1c1c032-6d2a-4ca9-bf67-137e7f061874] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:39.785927  293223 system_pods.go:61] "storage-provisioner" [e819c1d3-f079-4b3e-a4b1-9d2cfbf37344] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:39.785934  293223 system_pods.go:74] duration metric: took 3.674729ms to wait for pod list to return data ...
	I1216 05:05:39.785950  293223 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:39.788192  293223 default_sa.go:45] found service account: "default"
	I1216 05:05:39.788211  293223 default_sa.go:55] duration metric: took 2.254358ms for default service account to be created ...
	I1216 05:05:39.788220  293223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:05:39.790710  293223 system_pods.go:86] 8 kube-system pods found
	I1216 05:05:39.790733  293223 system_pods.go:89] "coredns-66bc5c9577-65x5z" [8c610230-fb3c-4bef-a6d3-8870276ed93e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:39.790755  293223 system_pods.go:89] "etcd-default-k8s-diff-port-375252" [c04dc1e6-e72a-4915-8ed7-4976b69962c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:39.790763  293223 system_pods.go:89] "kindnet-xml94" [2f0ff435-e0ff-441e-bdf8-0cf9a6da784c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:39.790768  293223 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-375252" [51755996-a6d5-4427-8ecf-d5635d5bae94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:39.790775  293223 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-375252" [01874cfa-681a-48b1-8378-2f33c21185ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:39.790782  293223 system_pods.go:89] "kube-proxy-dmw9t" [c72d3d28-eaa9-4a35-b7b9-5207edf40372] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:39.790788  293223 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-375252" [e1c1c032-6d2a-4ca9-bf67-137e7f061874] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:39.790796  293223 system_pods.go:89] "storage-provisioner" [e819c1d3-f079-4b3e-a4b1-9d2cfbf37344] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:39.790802  293223 system_pods.go:126] duration metric: took 2.577172ms to wait for k8s-apps to be running ...
	I1216 05:05:39.790808  293223 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:05:39.790842  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:39.803851  293223 system_svc.go:56] duration metric: took 13.033531ms WaitForService to wait for kubelet
	I1216 05:05:39.803883  293223 kubeadm.go:587] duration metric: took 4.059935198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:39.803905  293223 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:39.807402  293223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:39.807433  293223 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:39.807450  293223 node_conditions.go:105] duration metric: took 3.538982ms to run NodePressure ...
	I1216 05:05:39.807465  293223 start.go:242] waiting for startup goroutines ...
	I1216 05:05:39.807473  293223 start.go:247] waiting for cluster config update ...
	I1216 05:05:39.807485  293223 start.go:256] writing updated cluster config ...
	I1216 05:05:39.807774  293223 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:39.811763  293223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:39.815359  293223 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-65x5z" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:05:41.820227  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:40.310699  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:42.309120  285878 pod_ready.go:94] pod "coredns-66bc5c9577-47h64" is "Ready"
	I1216 05:05:42.309150  285878 pod_ready.go:86] duration metric: took 33.006648417s for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.311734  285878 pod_ready.go:83] waiting for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.318072  285878 pod_ready.go:94] pod "etcd-embed-certs-127599" is "Ready"
	I1216 05:05:42.318101  285878 pod_ready.go:86] duration metric: took 6.341713ms for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.320261  285878 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.324744  285878 pod_ready.go:94] pod "kube-apiserver-embed-certs-127599" is "Ready"
	I1216 05:05:42.324772  285878 pod_ready.go:86] duration metric: took 4.487423ms for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.327251  285878 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.506418  285878 pod_ready.go:94] pod "kube-controller-manager-embed-certs-127599" is "Ready"
	I1216 05:05:42.506458  285878 pod_ready.go:86] duration metric: took 179.18035ms for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.706844  285878 pod_ready.go:83] waiting for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.107603  285878 pod_ready.go:94] pod "kube-proxy-j79cw" is "Ready"
	I1216 05:05:43.107634  285878 pod_ready.go:86] duration metric: took 400.764534ms for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.306943  285878 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.706579  285878 pod_ready.go:94] pod "kube-scheduler-embed-certs-127599" is "Ready"
	I1216 05:05:43.706605  285878 pod_ready.go:86] duration metric: took 399.628671ms for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.706621  285878 pod_ready.go:40] duration metric: took 34.408131714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:43.766223  285878 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:05:43.767753  285878 out.go:179] * Done! kubectl is now configured to use "embed-certs-127599" cluster and "default" namespace by default
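The pod_ready waits above check each pod's Ready condition by label selector. A rough kubectl equivalent, assuming the kubeconfig already points at the freshly configured cluster:

    $ kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    $ kubectl -n kube-system wait pod -l component=etcd --for=condition=Ready --timeout=4m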
	I1216 05:05:43.000076  292503 out.go:252]   - Configuring RBAC rules ...
	I1216 05:05:43.000232  292503 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 05:05:43.006930  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 05:05:43.013081  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 05:05:43.015993  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 05:05:43.019821  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 05:05:43.022802  292503 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 05:05:43.361353  292503 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 05:05:43.785608  292503 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 05:05:44.352605  292503 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 05:05:44.353897  292503 kubeadm.go:319] 
	I1216 05:05:44.353997  292503 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 05:05:44.354009  292503 kubeadm.go:319] 
	I1216 05:05:44.354128  292503 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 05:05:44.354140  292503 kubeadm.go:319] 
	I1216 05:05:44.354176  292503 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 05:05:44.354269  292503 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 05:05:44.354347  292503 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 05:05:44.354358  292503 kubeadm.go:319] 
	I1216 05:05:44.354432  292503 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 05:05:44.354442  292503 kubeadm.go:319] 
	I1216 05:05:44.354516  292503 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 05:05:44.354538  292503 kubeadm.go:319] 
	I1216 05:05:44.354607  292503 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 05:05:44.354731  292503 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 05:05:44.354839  292503 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 05:05:44.354850  292503 kubeadm.go:319] 
	I1216 05:05:44.354979  292503 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 05:05:44.355113  292503 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 05:05:44.355125  292503 kubeadm.go:319] 
	I1216 05:05:44.355233  292503 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w2w8nj.tpu6zggrwl1bblxh \
	I1216 05:05:44.355379  292503 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 05:05:44.355418  292503 kubeadm.go:319] 	--control-plane 
	I1216 05:05:44.355425  292503 kubeadm.go:319] 
	I1216 05:05:44.355541  292503 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 05:05:44.355547  292503 kubeadm.go:319] 
	I1216 05:05:44.355652  292503 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w2w8nj.tpu6zggrwl1bblxh \
	I1216 05:05:44.355787  292503 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
	I1216 05:05:44.359394  292503 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:05:44.359535  292503 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:05:44.359550  292503 cni.go:84] Creating CNI manager for ""
	I1216 05:05:44.359559  292503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:44.361669  292503 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 05:05:44.362987  292503 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 05:05:44.368597  292503 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1216 05:05:44.368615  292503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 05:05:44.386199  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1216 05:05:43.822303  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:45.823500  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:48.320056  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	I1216 05:05:44.892254  292503 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 05:05:44.892325  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:44.892390  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-418580 minikube.k8s.io/updated_at=2025_12_16T05_05_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=newest-cni-418580 minikube.k8s.io/primary=true
	I1216 05:05:44.905578  292503 ops.go:34] apiserver oom_adj: -16
	I1216 05:05:44.996594  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:45.497238  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:45.997166  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:46.496875  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:46.997245  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:47.497214  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:47.997272  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:48.497203  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:48.996890  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:49.067493  292503 kubeadm.go:1114] duration metric: took 4.175226314s to wait for elevateKubeSystemPrivileges
	I1216 05:05:49.067533  292503 kubeadm.go:403] duration metric: took 13.322133667s to StartCluster
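The elevateKubeSystemPrivileges step above is a retry loop: it re-runs `kubectl get sa default` roughly every 500ms until kube-controller-manager has created the default ServiceAccount. A bash sketch of the same wait, using the binary and kubeconfig paths from the log:

    $ until sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done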
	I1216 05:05:49.067556  292503 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:49.067620  292503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:49.070444  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:49.070849  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 05:05:49.071407  292503 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:49.072456  292503 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:05:49.072571  292503 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-418580"
	I1216 05:05:49.072594  292503 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-418580"
	I1216 05:05:49.072627  292503 host.go:66] Checking if "newest-cni-418580" exists ...
	I1216 05:05:49.072680  292503 addons.go:70] Setting default-storageclass=true in profile "newest-cni-418580"
	I1216 05:05:49.072712  292503 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-418580"
	I1216 05:05:49.073075  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.072422  292503 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:49.073196  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.074924  292503 out.go:179] * Verifying Kubernetes components...
	I1216 05:05:49.077139  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:49.098754  292503 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:05:49.100209  292503 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:49.100235  292503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:05:49.100297  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:49.101275  292503 addons.go:239] Setting addon default-storageclass=true in "newest-cni-418580"
	I1216 05:05:49.101340  292503 host.go:66] Checking if "newest-cni-418580" exists ...
	I1216 05:05:49.101798  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.131640  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:49.132569  292503 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:49.132591  292503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:05:49.132686  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:49.173979  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:49.209645  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 05:05:49.255857  292503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:49.269695  292503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:49.303402  292503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:49.394100  292503 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
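The long sed pipeline above edits the coredns ConfigMap in flight and replaces it, injecting a hosts block that resolves host.minikube.internal to the gateway IP. To inspect the result afterwards (the fragment below is reconstructed from the sed expression; exact indentation may differ):

    $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    ...
            hosts {
               192.168.85.1 host.minikube.internal
               fallthrough
            }
    ...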
	I1216 05:05:49.395894  292503 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:49.395955  292503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:49.590389  292503 api_server.go:72] duration metric: took 517.235446ms to wait for apiserver process to appear ...
	I1216 05:05:49.590414  292503 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:49.590436  292503 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:05:49.596111  292503 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:05:49.597185  292503 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:05:49.597220  292503 api_server.go:131] duration metric: took 6.79936ms to wait for apiserver health ...
	I1216 05:05:49.597229  292503 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:49.600173  292503 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:49.600202  292503 system_pods.go:61] "coredns-7d764666f9-67tdz" [2e7ab1b2-db44-41e7-9601-fe1fb0f149b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:05:49.600210  292503 system_pods.go:61] "etcd-newest-cni-418580" [5345f203-8998-46d2-af9b-9f60ee31c9b1] Running
	I1216 05:05:49.600220  292503 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 05:05:49.600222  292503 system_pods.go:61] "kindnet-btffs" [54b00649-4160-4dab-bce1-83080b927bde] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:49.600237  292503 system_pods.go:61] "kube-apiserver-newest-cni-418580" [922b709d-f979-411d-87d3-1d6e2085c405] Running
	I1216 05:05:49.600308  292503 system_pods.go:61] "kube-controller-manager-newest-cni-418580" [41246201-9b24-43b9-a291-b5d087085e7a] Running
	I1216 05:05:49.600335  292503 system_pods.go:61] "kube-proxy-ls9s9" [dbf8d374-0c38-4c64-ac2c-9a71db8b4223] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:49.600346  292503 system_pods.go:61] "kube-scheduler-newest-cni-418580" [7a507411-b5e7-4bc1-8740-e07e3b476f8d] Running
	I1216 05:05:49.600363  292503 system_pods.go:61] "storage-provisioner" [fa6b12e1-67b9-40bb-93c3-42a32f0d9ed4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:05:49.600373  292503 system_pods.go:74] duration metric: took 3.137662ms to wait for pod list to return data ...
	I1216 05:05:49.600386  292503 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:49.601530  292503 addons.go:530] duration metric: took 529.068992ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:05:49.602559  292503 default_sa.go:45] found service account: "default"
	I1216 05:05:49.602574  292503 default_sa.go:55] duration metric: took 2.179965ms for default service account to be created ...
	I1216 05:05:49.602582  292503 kubeadm.go:587] duration metric: took 529.436023ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 05:05:49.602608  292503 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:49.604976  292503 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:49.605000  292503 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:49.605061  292503 node_conditions.go:105] duration metric: took 2.447077ms to run NodePressure ...
	I1216 05:05:49.605076  292503 start.go:242] waiting for startup goroutines ...
	I1216 05:05:49.899603  292503 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-418580" context rescaled to 1 replicas
	I1216 05:05:49.899646  292503 start.go:247] waiting for cluster config update ...
	I1216 05:05:49.899661  292503 start.go:256] writing updated cluster config ...
	I1216 05:05:49.900004  292503 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:49.957073  292503 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:05:49.959152  292503 out.go:179] * Done! kubectl is now configured to use "newest-cni-418580" cluster and "default" namespace by default
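The `rescaled to 1 replicas` line above is minikube trimming the default CoreDNS Deployment down to a single replica for a single-node cluster; the equivalent manual step would be roughly:

    $ kubectl -n kube-system scale deployment coredns --replicas=1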
	
	
	==> CRI-O <==
	Dec 16 05:05:21 no-preload-990631 crio[570]: time="2025-12-16T05:05:21.616676599Z" level=info msg="Started container" PID=1730 containerID=d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper id=6bf987c5-81e3-46cb-a7ad-40ff34176571 name=/runtime.v1.RuntimeService/StartContainer sandboxID=21902216ee3f08cee4dd6638eaa53487a32b13a6f37c322b2d5e16aee532d290
	Dec 16 05:05:21 no-preload-990631 crio[570]: time="2025-12-16T05:05:21.652934353Z" level=info msg="Removing container: 97a585270f13242f4a3d89e07445d78f365f0d47510dcad4229c6c38e6bf6223" id=fbd65280-2680-4308-863b-e95d050e49ec name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:21 no-preload-990631 crio[570]: time="2025-12-16T05:05:21.672828479Z" level=info msg="Removed container 97a585270f13242f4a3d89e07445d78f365f0d47510dcad4229c6c38e6bf6223: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper" id=fbd65280-2680-4308-863b-e95d050e49ec name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.678427036Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ef9748be-9b6b-4792-837b-59151dbf2744 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.679426495Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f3edaaf4-964e-443c-a103-794eceaa550f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.680710997Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2c17f197-0887-44a1-947e-916c8383de96 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.680943703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.684965771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.685134092Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9697cd32fb1a9fe588e49986a6c220003f0d4542da71437a248cdbf16b5276eb/merged/etc/passwd: no such file or directory"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.685169942Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9697cd32fb1a9fe588e49986a6c220003f0d4542da71437a248cdbf16b5276eb/merged/etc/group: no such file or directory"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.686475853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.709284158Z" level=info msg="Created container 75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b: kube-system/storage-provisioner/storage-provisioner" id=2c17f197-0887-44a1-947e-916c8383de96 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.709847734Z" level=info msg="Starting container: 75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b" id=f1877710-318e-405b-bb7f-ce1ed40402e5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.711710345Z" level=info msg="Started container" PID=1746 containerID=75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b description=kube-system/storage-provisioner/storage-provisioner id=f1877710-318e-405b-bb7f-ce1ed40402e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93bcbe3bc6e0305e91f945121d05ca3432f0a96e7ebabffaa2085e26d18fb715
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.558992961Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d443444-012d-4615-98b4-9d9d3f29f79e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.559952068Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3812a1d3-61af-49be-aaa7-753a5805a82f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.561120134Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper" id=4166cc8b-a530-47f4-a2bd-241a990081c9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.561282031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.568476684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.569147119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.616645256Z" level=info msg="Created container 20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper" id=4166cc8b-a530-47f4-a2bd-241a990081c9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.617426454Z" level=info msg="Starting container: 20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259" id=e5e48480-5a30-445c-9b16-168c008cfc6d name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.619719906Z" level=info msg="Started container" PID=1781 containerID=20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper id=e5e48480-5a30-445c-9b16-168c008cfc6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=21902216ee3f08cee4dd6638eaa53487a32b13a6f37c322b2d5e16aee532d290
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.769956573Z" level=info msg="Removing container: d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6" id=6ebefe09-607d-40a9-a3de-59cf7b5307a5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.86471471Z" level=info msg="Removed container d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper" id=6ebefe09-607d-40a9-a3de-59cf7b5307a5 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	20c8bffe74798       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   21902216ee3f0       dashboard-metrics-scraper-867fb5f87b-grx64   kubernetes-dashboard
	75d5b92ec924e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   93bcbe3bc6e03       storage-provisioner                          kube-system
	9fb7cb0f528cf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   5cdd1b748349e       kubernetes-dashboard-b84665fb8-rvxh6         kubernetes-dashboard
	f0d0dda87b464       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   5ce21e91f094c       busybox                                      default
	190d84165a587       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   022ad7ade8695       coredns-7d764666f9-7l2ds                     kube-system
	e3911104d1e18       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   316ed04058e0b       kindnet-p27s9                                kube-system
	9c81e85ff9b84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   93bcbe3bc6e03       storage-provisioner                          kube-system
	9a90b9a88d9e5       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           51 seconds ago      Running             kube-proxy                  0                   9bc69a9476caa       kube-proxy-2bjbr                             kube-system
	4b4c73e54f123       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           53 seconds ago      Running             kube-scheduler              0                   d63109d4147bd       kube-scheduler-no-preload-990631             kube-system
	c7bb8494fe95f       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           53 seconds ago      Running             kube-controller-manager     0                   73899f1b3a4cf       kube-controller-manager-no-preload-990631    kube-system
	3332f5674ea85       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           53 seconds ago      Running             kube-apiserver              0                   f91fb6993a524       kube-apiserver-no-preload-990631             kube-system
	f9cec7fd839f6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   01bac8788c8e1       etcd-no-preload-990631                       kube-system
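The table above is CRI-O's container listing for no-preload-990631. It can be regenerated from the host side with crictl over minikube ssh; a sketch, using the profile name from the log:

    $ minikube -p no-preload-990631 ssh -- sudo crictl ps -a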
	
	
	==> coredns [190d84165a587af2c099887138e64ae3db4eef4492add527746874d69791a022] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33811 - 57392 "HINFO IN 4512378612646338026.8813518048393044569. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016589329s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
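The `plugin/ready: Plugins not ready: "kubernetes"` lines come from CoreDNS's ready plugin, which holds the readiness endpoint (port 8181 by default) unready until the kubernetes plugin has synced with the API server. A hypothetical in-cluster probe (the pod IP placeholder is an assumption, not taken from the log):

    $ curl -s http://<coredns-pod-ip>:8181/ready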
	
	
	==> describe nodes <==
	Name:               no-preload-990631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-990631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=no-preload-990631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-990631
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:05:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:05:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:05:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:05:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:05:31 +0000   Tue, 16 Dec 2025 05:04:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-990631
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                fe9a87bf-3ac0-4eb7-ad47-1e9ad27489f8
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-7l2ds                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-990631                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-p27s9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-990631              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-990631     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-2bjbr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-990631              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-grx64    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-rvxh6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-990631 event: Registered Node no-preload-990631 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node no-preload-990631 event: Registered Node no-preload-990631 in Controller
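This section is `kubectl describe node` output; note that the Allocated resources block is just the column sums of the pod table above it (250m + 200m + 4 x 100m = 850m of CPU requests; 70Mi + 100Mi + 50Mi = 220Mi of memory requests). To regenerate it:

    $ kubectl describe node no-preload-990631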
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [f9cec7fd839f6e0c5273a0f3e839f84aa3e28f3ba21d352e705a6f3c17d87bd9] <==
	{"level":"warn","ts":"2025-12-16T05:04:59.888757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.897920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.904543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.911369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.917921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.924678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.931324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.937599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.944066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.955139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.961399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.967527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.974716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.981192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.989416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.996402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.002778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.009847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.018535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.026549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.042678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.050202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.057483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.064238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.113429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60730","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:05:52 up 48 min,  0 user,  load average: 4.18, 2.98, 2.04
	Linux no-preload-990631 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3911104d1e183d624a37bd5240f1cb4daad3ed819947b773c46def5220cfd0d] <==
	I1216 05:05:01.160674       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:05:01.160949       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 05:05:01.161146       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:05:01.161167       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:05:01.161188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:05:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:05:01.362613       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:05:01.362645       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:05:01.362654       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:05:01.362776       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:05:01.662792       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:05:01.662820       1 metrics.go:72] Registering metrics
	I1216 05:05:01.662886       1 controller.go:711] "Syncing nftables rules"
	I1216 05:05:11.362648       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:11.362703       1 main.go:301] handling current node
	I1216 05:05:21.364051       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:21.364115       1 main.go:301] handling current node
	I1216 05:05:31.362891       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:31.362929       1 main.go:301] handling current node
	I1216 05:05:41.366358       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:41.366388       1 main.go:301] handling current node
	I1216 05:05:51.368126       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:51.368159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3332f5674ea8590f97c6776b7b666398f75a26d82a6f36588091b67934df1dd5] <==
	I1216 05:05:00.575839       1 aggregator.go:187] initial CRD sync complete...
	I1216 05:05:00.575869       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 05:05:00.575888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:05:00.575845       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:00.575893       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:05:00.576074       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:05:00.576309       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:05:00.585518       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:05:00.589420       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 05:05:00.589559       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:00.593803       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:00.593825       1 policy_source.go:248] refreshing policies
	I1216 05:05:00.603098       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:05:00.604793       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:05:00.820234       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:05:00.856333       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:05:00.882848       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:05:00.895978       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:05:00.959068       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.194.42"}
	I1216 05:05:00.969664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.168.215"}
	I1216 05:05:01.462536       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 05:05:04.109934       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:05:04.258443       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:05:04.310583       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c7bb8494fe95f58ef0c8c0f1c32dd0af7c7cc7b113eb924e8c2854b0b292a637] <==
	I1216 05:05:03.713831       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:03.713838       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712715       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712643       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712679       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712652       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712784       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712728       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712746       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712816       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712757       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712817       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712475       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712492       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712775       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.715487       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712775       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.715800       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712785       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.719870       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.720783       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:03.812245       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.812267       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 05:05:03.812273       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 05:05:03.821090       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9a90b9a88d9e59d0d5baa394070a6cb12b35ea8f5a34889d1d7055ad38ce3468] <==
	I1216 05:05:00.960059       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:05:01.029729       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:01.130324       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:01.130355       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1216 05:05:01.130487       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:05:01.148945       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:05:01.149001       1 server_linux.go:136] "Using iptables Proxier"
	I1216 05:05:01.154000       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:05:01.154346       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 05:05:01.154365       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:01.155791       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:05:01.155819       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:05:01.156264       1 config.go:200] "Starting service config controller"
	I1216 05:05:01.156286       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:05:01.156328       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:05:01.156341       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:05:01.156825       1 config.go:309] "Starting node config controller"
	I1216 05:05:01.156906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:05:01.156941       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:05:01.257146       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:05:01.257163       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:05:01.257199       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4b4c73e54f123802f62a7af7b9af9fe41679d8d51fd01e105ca758edf669a4a5] <==
	I1216 05:04:59.344976       1 serving.go:386] Generated self-signed cert in-memory
	W1216 05:05:00.490238       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:05:00.490278       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:05:00.490291       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:05:00.490301       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:05:00.519562       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1216 05:05:00.519592       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:00.524209       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:00.524650       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:00.524587       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:05:00.524613       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:05:00.625811       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 05:05:15 no-preload-990631 kubelet[719]: E1216 05:05:15.483088     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-990631" containerName="kube-controller-manager"
	Dec 16 05:05:17 no-preload-990631 kubelet[719]: E1216 05:05:17.900413     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-990631" containerName="etcd"
	Dec 16 05:05:18 no-preload-990631 kubelet[719]: E1216 05:05:18.641838     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-990631" containerName="etcd"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: E1216 05:05:21.557285     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: I1216 05:05:21.557327     719 scope.go:122] "RemoveContainer" containerID="97a585270f13242f4a3d89e07445d78f365f0d47510dcad4229c6c38e6bf6223"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: I1216 05:05:21.651471     719 scope.go:122] "RemoveContainer" containerID="97a585270f13242f4a3d89e07445d78f365f0d47510dcad4229c6c38e6bf6223"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: E1216 05:05:21.651733     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: I1216 05:05:21.651766     719 scope.go:122] "RemoveContainer" containerID="d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: E1216 05:05:21.651955     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-grx64_kubernetes-dashboard(3210284c-350f-4805-ac81-eb9cb0e85476)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" podUID="3210284c-350f-4805-ac81-eb9cb0e85476"
	Dec 16 05:05:30 no-preload-990631 kubelet[719]: E1216 05:05:30.747598     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:30 no-preload-990631 kubelet[719]: I1216 05:05:30.747636     719 scope.go:122] "RemoveContainer" containerID="d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6"
	Dec 16 05:05:30 no-preload-990631 kubelet[719]: E1216 05:05:30.747845     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-grx64_kubernetes-dashboard(3210284c-350f-4805-ac81-eb9cb0e85476)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" podUID="3210284c-350f-4805-ac81-eb9cb0e85476"
	Dec 16 05:05:31 no-preload-990631 kubelet[719]: I1216 05:05:31.677909     719 scope.go:122] "RemoveContainer" containerID="9c81e85ff9b8473632b2161e2625db403f46ed30b89331fc578fdc16bf337529"
	Dec 16 05:05:36 no-preload-990631 kubelet[719]: E1216 05:05:36.432249     719 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7l2ds" containerName="coredns"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: E1216 05:05:44.557683     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: I1216 05:05:44.558269     719 scope.go:122] "RemoveContainer" containerID="d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: I1216 05:05:44.715839     719 scope.go:122] "RemoveContainer" containerID="d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: E1216 05:05:44.716102     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: I1216 05:05:44.716139     719 scope.go:122] "RemoveContainer" containerID="20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: E1216 05:05:44.716331     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-grx64_kubernetes-dashboard(3210284c-350f-4805-ac81-eb9cb0e85476)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" podUID="3210284c-350f-4805-ac81-eb9cb0e85476"
	Dec 16 05:05:50 no-preload-990631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:05:50 no-preload-990631 kubelet[719]: I1216 05:05:50.323463     719 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 05:05:50 no-preload-990631 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:05:50 no-preload-990631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:05:50 no-preload-990631 systemd[1]: kubelet.service: Consumed 1.754s CPU time.
	
	
	==> kubernetes-dashboard [9fb7cb0f528cf2b4dfef928c1799678a2d9f7afec43110df5511ad9c2f36a8c5] <==
	2025/12/16 05:05:10 Starting overwatch
	2025/12/16 05:05:10 Using namespace: kubernetes-dashboard
	2025/12/16 05:05:10 Using in-cluster config to connect to apiserver
	2025/12/16 05:05:10 Using secret token for csrf signing
	2025/12/16 05:05:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 05:05:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 05:05:10 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/16 05:05:10 Generating JWE encryption key
	2025/12/16 05:05:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 05:05:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 05:05:11 Initializing JWE encryption key from synchronized object
	2025/12/16 05:05:11 Creating in-cluster Sidecar client
	2025/12/16 05:05:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:05:11 Serving insecurely on HTTP port: 9090
	2025/12/16 05:05:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b] <==
	I1216 05:05:31.723635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:05:31.729858       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:05:31.729899       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:05:31.731864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:35.187464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:39.448654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:43.047943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:46.101917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:49.128222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:49.137957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:05:49.138397       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:05:49.141126       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-990631_e90612c3-9a5c-407b-8e79-d468426d8c3b!
	I1216 05:05:49.141748       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da628a4a-675d-4b94-b84f-777f27acf7a6", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-990631_e90612c3-9a5c-407b-8e79-d468426d8c3b became leader
	W1216 05:05:49.153366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:49.163610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:05:49.241503       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-990631_e90612c3-9a5c-407b-8e79-d468426d8c3b!
	W1216 05:05:51.167520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:51.172738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:53.176616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:53.210788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9c81e85ff9b8473632b2161e2625db403f46ed30b89331fc578fdc16bf337529] <==
	I1216 05:05:00.923744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 05:05:30.927056       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
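The fatal line in the second storage-provisioner capture above, `dial tcp 10.96.0.1:443: i/o timeout`, is the provisioner's startup check against the in-cluster apiserver VIP: 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR reported in the kube-apiserver log. A thirty-second timeout there is consistent with the container coming up before kube-proxy had reprogrammed service rules after the restart; the replacement instance (75d5b92ec924...) completed the same check and went on to acquire the leader lease. A minimal sketch of that check, assuming k8s.io/client-go and a pod with a mounted service account:

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config targets the "kubernetes" service VIP (10.96.0.1:443 here).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("not running in a pod: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("building clientset: %v", err)
		}
		// GET /version: the request that timed out in the first provisioner instance.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Println("apiserver reachable, version", v.GitVersion)
	}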
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-990631 -n no-preload-990631
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-990631 -n no-preload-990631: exit status 2 (360.731124ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
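The status probes here pass a Go text/template to `--format`; minikube renders it against its status struct, and a non-zero exit (2 in this run) signals that some component is not Running, which is why the harness marks it "(may be ok)". A rough illustration of just the rendering step, with a stand-in struct whose field names are assumed only from the templates used in this report, not taken from minikube's source:

	package main

	import (
		"os"
		"text/template"
	)

	// clusterStatus is a hypothetical stand-in for minikube's status struct;
	// only the fields exercised by this report's --format flags are included.
	type clusterStatus struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := clusterStatus{Host: "Running", APIServer: "Paused"}
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := t.Execute(os.Stdout, st); err != nil {
			panic(err) // prints "Paused" for a paused cluster
		}
	}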
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-990631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
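The pod query just above filters server-side: `--field-selector=status.phase!=Running` is evaluated by the apiserver, so only non-Running pods come back over the wire. The equivalent call through client-go, as a sketch that loads the default kubeconfig rather than the harness's explicit `--context`:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig from its default location.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := clientcmd.BuildConfigFromFlags("", rules.GetDefaultFilename())
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Server-side filter: same selector the harness passes to kubectl.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace + "/" + p.Name)
		}
	}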
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-990631
helpers_test.go:244: (dbg) docker inspect no-preload-990631:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1",
	        "Created": "2025-12-16T05:03:37.47946154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:04:52.509131394Z",
	            "FinishedAt": "2025-12-16T05:04:51.501677195Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/hosts",
	        "LogPath": "/var/lib/docker/containers/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1/ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1-json.log",
	        "Name": "/no-preload-990631",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-990631:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-990631",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac137694b74209455306ae7351d1099c04995950a4372c2ce8ad506f986d3da1",
	                "LowerDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0859190a20faa0ff6793139e9ef8e2230ac5a7c307e4927603e51af8af651f19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-990631",
	                "Source": "/var/lib/docker/volumes/no-preload-990631/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-990631",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-990631",
	                "name.minikube.sigs.k8s.io": "no-preload-990631",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "970ff16c69ac958cb41d090a6f22c31f0474927c9c40baf7961b33aa77fb3342",
	            "SandboxKey": "/var/run/docker/netns/970ff16c69ac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-990631": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a201df1b3d2f7b8b13672d056e69c36960ed0846b69e35e8077f1fc3c7bcec36",
	                    "EndpointID": "f836912d023449691945eabde441c3677e9ba6d8a1478fb2157fed55ea5988d9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:6f:39:6b:59:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-990631",
	                        "ac137694b742"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
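Rather than parsing the full JSON above, the harness usually extracts single fields from inspect output with a Go template; the Last Start log further down shows the same pattern (`docker container inspect ... --format={{.State.Status}}`). A minimal equivalent for this container, assuming a local docker CLI on PATH; the field names come straight from the State object above:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent of: docker container inspect --format '{{.State.Status}} paused={{.State.Paused}}' no-preload-990631
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}} paused={{.State.Paused}}",
			"no-preload-990631").Output()
		if err != nil {
			log.Fatalf("docker inspect: %v", err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. "running paused=false"
	}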
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-990631 -n no-preload-990631
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-990631 -n no-preload-990631: exit status 2 (346.273682ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-990631 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-990631 logs -n 25: (1.225702093s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p kubernetes-upgrade-132399                                                                                                                                                                                                                         │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ delete  │ -p disable-driver-mounts-148966                                                                                                                                                                                                                      │ disable-driver-mounts-148966 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p no-preload-990631 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p embed-certs-127599 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p newest-cni-418580 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:05:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:05:28.619499  293223 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:05:28.619651  293223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:28.619666  293223 out.go:374] Setting ErrFile to fd 2...
	I1216 05:05:28.619672  293223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:28.619982  293223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:05:28.620585  293223 out.go:368] Setting JSON to false
	I1216 05:05:28.621902  293223 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2880,"bootTime":1765858649,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:05:28.621960  293223 start.go:143] virtualization: kvm guest
	I1216 05:05:28.624132  293223 out.go:179] * [default-k8s-diff-port-375252] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:05:28.625828  293223 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:05:28.625847  293223 notify.go:221] Checking for updates...
	I1216 05:05:28.628337  293223 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:05:28.629978  293223 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:28.631193  293223 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:05:28.632390  293223 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:05:28.633511  293223 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:05:28.635178  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:28.635713  293223 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:05:28.661922  293223 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:05:28.662051  293223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:28.718273  293223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-16 05:05:28.708319047 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:28.718425  293223 docker.go:319] overlay module found
	I1216 05:05:28.719951  293223 out.go:179] * Using the docker driver based on existing profile
	I1216 05:05:28.721068  293223 start.go:309] selected driver: docker
	I1216 05:05:28.721089  293223 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:28.721191  293223 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:05:28.721731  293223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:28.788963  293223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-16 05:05:28.778467948 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:28.789410  293223 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:28.789448  293223 cni.go:84] Creating CNI manager for ""
	I1216 05:05:28.789523  293223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:28.789569  293223 start.go:353] cluster config:
	{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:28.791508  293223 out.go:179] * Starting "default-k8s-diff-port-375252" primary control-plane node in "default-k8s-diff-port-375252" cluster
	I1216 05:05:28.792954  293223 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:05:28.794079  293223 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	W1216 05:05:25.308136  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:27.308269  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:28.795105  293223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:28.795150  293223 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:05:28.795162  293223 cache.go:65] Caching tarball of preloaded images
	I1216 05:05:28.795205  293223 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:05:28.795272  293223 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:05:28.795286  293223 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:05:28.795400  293223 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:05:28.816986  293223 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:05:28.817006  293223 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:05:28.817050  293223 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:05:28.817103  293223 start.go:360] acquireMachinesLock for default-k8s-diff-port-375252: {Name:mkb8cd554afbe4d541db5e350d6207558dad5cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:05:28.817174  293223 start.go:364] duration metric: took 39.023µs to acquireMachinesLock for "default-k8s-diff-port-375252"
	I1216 05:05:28.817192  293223 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:05:28.817201  293223 fix.go:54] fixHost starting: 
	I1216 05:05:28.817485  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:28.837506  293223 fix.go:112] recreateIfNeeded on default-k8s-diff-port-375252: state=Stopped err=<nil>
	W1216 05:05:28.837533  293223 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:05:25.049152  292503 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:05:25.049357  292503 start.go:159] libmachine.API.Create for "newest-cni-418580" (driver="docker")
	I1216 05:05:25.049412  292503 client.go:173] LocalClient.Create starting
	I1216 05:05:25.049478  292503 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:05:25.049507  292503 main.go:143] libmachine: Decoding PEM data...
	I1216 05:05:25.049530  292503 main.go:143] libmachine: Parsing certificate...
	I1216 05:05:25.049578  292503 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:05:25.049595  292503 main.go:143] libmachine: Decoding PEM data...
	I1216 05:05:25.049605  292503 main.go:143] libmachine: Parsing certificate...
	I1216 05:05:25.049898  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:05:25.066218  292503 cli_runner.go:211] docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:05:25.066280  292503 network_create.go:284] running [docker network inspect newest-cni-418580] to gather additional debugging logs...
	I1216 05:05:25.066296  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580
	W1216 05:05:25.082484  292503 cli_runner.go:211] docker network inspect newest-cni-418580 returned with exit code 1
	I1216 05:05:25.082512  292503 network_create.go:287] error running [docker network inspect newest-cni-418580]: docker network inspect newest-cni-418580: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-418580 not found
	I1216 05:05:25.082540  292503 network_create.go:289] output of [docker network inspect newest-cni-418580]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-418580 not found
	
	** /stderr **
	I1216 05:05:25.082645  292503 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:25.099947  292503 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:05:25.100678  292503 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:05:25.101416  292503 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:05:25.102201  292503 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b4cd001204df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:4c:f5:dc:0a:90} reservation:<nil>}
	I1216 05:05:25.103269  292503 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f240c0}
	I1216 05:05:25.103298  292503 network_create.go:124] attempt to create docker network newest-cni-418580 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 05:05:25.103367  292503 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-418580 newest-cni-418580
	I1216 05:05:25.151692  292503 network_create.go:108] docker network newest-cni-418580 192.168.85.0/24 created
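
The four "skipping subnet ... that is taken" lines show minikube walking candidate private /24s, advancing the third octet by 9 (49, 58, 67, 76, 85), until it finds one whose gateway address is unclaimed. A self-contained Go sketch of the same idea, using local interface addresses as the "taken" signal (an assumption for illustration; the real check also consults docker's IPAM data):

package main

import (
	"fmt"
	"net"
)

// gatewayInUse reports whether ip is already bound to a local interface,
// e.g. one of the br-xxxx bridges docker created for an existing network.
func gatewayInUse(ip net.IP) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Probe 192.168.49.0/24 first, then step the third octet by 9
	// (49, 58, 67, 76, 85, ...) until a gateway address is free.
	for octet := 49; octet <= 247; octet += 9 {
		gw := net.IPv4(192, 168, byte(octet), 1)
		if gatewayInUse(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", octet, gw)
		return
	}
}
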
	I1216 05:05:25.151725  292503 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-418580" container
	I1216 05:05:25.151806  292503 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:05:25.168787  292503 cli_runner.go:164] Run: docker volume create newest-cni-418580 --label name.minikube.sigs.k8s.io=newest-cni-418580 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:05:25.186120  292503 oci.go:103] Successfully created a docker volume newest-cni-418580
	I1216 05:05:25.186217  292503 cli_runner.go:164] Run: docker run --rm --name newest-cni-418580-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-418580 --entrypoint /usr/bin/test -v newest-cni-418580:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:05:25.583110  292503 oci.go:107] Successfully prepared a docker volume newest-cni-418580
	I1216 05:05:25.583174  292503 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:05:25.583183  292503 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:05:25.583245  292503 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-418580:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 05:05:28.338637  292503 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-418580:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (2.755349769s)
	I1216 05:05:28.338666  292503 kic.go:203] duration metric: took 2.755479724s to extract preloaded images to volume ...
	W1216 05:05:28.338763  292503 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 05:05:28.338816  292503 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 05:05:28.338866  292503 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 05:05:28.403669  292503 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-418580 --name newest-cni-418580 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-418580 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-418580 --network newest-cni-418580 --ip 192.168.85.2 --volume newest-cni-418580:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 05:05:28.722054  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Running}}
	I1216 05:05:28.742885  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.766670  292503 cli_runner.go:164] Run: docker exec newest-cni-418580 stat /var/lib/dpkg/alternatives/iptables
	I1216 05:05:28.817325  292503 oci.go:144] the created container "newest-cni-418580" has a running status.
	I1216 05:05:28.817353  292503 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa...
	I1216 05:05:28.858625  292503 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 05:05:28.890286  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.914645  292503 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 05:05:28.914674  292503 kic_runner.go:114] Args: [docker exec --privileged newest-cni-418580 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 05:05:28.970557  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.995784  292503 machine.go:94] provisionDockerMachine start ...
	I1216 05:05:28.995879  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:29.018594  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:29.018845  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:29.018853  292503 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:05:29.019705  292503 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49150->127.0.0.1:33097: read: connection reset by peer
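
The "connection reset by peer" above is the expected first failure: the container was started moments earlier and its sshd is not yet accepting connections, so the provisioner retries until the later "SSH cmd err, output: <nil>" lines succeed. A toy Go sketch of dial-with-retry under that assumption (the address and attempt count here are made up for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps dialing until sshd inside the container comes up,
// treating early connection resets as transient.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		c, err = net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return c, nil
		}
		time.Sleep(time.Second) // sshd not ready yet; back off and retry
	}
	return nil, fmt.Errorf("dial %s: %w", addr, err)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33097", 30)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected:", conn.RemoteAddr())
}
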
	W1216 05:05:29.106433  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:31.606893  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:28.839266  293223 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-375252" ...
	I1216 05:05:28.839325  293223 cli_runner.go:164] Run: docker start default-k8s-diff-port-375252
	I1216 05:05:29.113119  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:29.132662  293223 kic.go:430] container "default-k8s-diff-port-375252" state is running.
	I1216 05:05:29.133094  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:29.152639  293223 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:05:29.152879  293223 machine.go:94] provisionDockerMachine start ...
	I1216 05:05:29.152970  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:29.171277  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:29.171536  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:29.171549  293223 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:05:29.172205  293223 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51300->127.0.0.1:33103: read: connection reset by peer
	I1216 05:05:32.297657  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:05:32.297684  293223 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-375252"
	I1216 05:05:32.297751  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.318798  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.319051  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.319069  293223 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-375252 && echo "default-k8s-diff-port-375252" | sudo tee /etc/hostname
	I1216 05:05:32.456847  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:05:32.456922  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.476706  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.477045  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.477072  293223 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-375252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-375252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-375252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:32.606389  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:32.606417  293223 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:32.606439  293223 ubuntu.go:190] setting up certificates
	I1216 05:05:32.606452  293223 provision.go:84] configureAuth start
	I1216 05:05:32.606505  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:32.627629  293223 provision.go:143] copyHostCerts
	I1216 05:05:32.627685  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:32.627701  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:32.627749  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:32.627836  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:32.627845  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:32.627865  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:32.627913  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:32.627921  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:32.627939  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:32.627997  293223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-375252 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-375252 localhost minikube]
	I1216 05:05:32.730496  293223 provision.go:177] copyRemoteCerts
	I1216 05:05:32.730565  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:32.730600  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.751146  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:32.843812  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:32.861358  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 05:05:32.879619  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:05:32.898030  293223 provision.go:87] duration metric: took 291.544808ms to configureAuth
	I1216 05:05:32.898063  293223 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:32.898285  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:32.898408  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.918526  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.918833  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.918878  293223 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:33.236179  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:33.236212  293223 machine.go:97] duration metric: took 4.083315908s to provisionDockerMachine
	I1216 05:05:33.236229  293223 start.go:293] postStartSetup for "default-k8s-diff-port-375252" (driver="docker")
	I1216 05:05:33.236246  293223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:33.236317  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:33.236391  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.255124  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.349976  293223 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:33.353576  293223 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:33.353609  293223 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:33.353622  293223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:33.353677  293223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:33.353784  293223 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:33.353908  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:33.361552  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:33.379312  293223 start.go:296] duration metric: took 143.06602ms for postStartSetup
	I1216 05:05:33.379398  293223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:33.379447  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.400517  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.491075  293223 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:33.495538  293223 fix.go:56] duration metric: took 4.678331684s for fixHost
	I1216 05:05:33.495565  293223 start.go:83] releasing machines lock for "default-k8s-diff-port-375252", held for 4.678379719s
	I1216 05:05:33.495640  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:33.516725  293223 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:33.516805  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.516816  293223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:33.516895  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.536965  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.537322  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	W1216 05:05:29.308470  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:31.807551  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:32.147295  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-418580
	
	I1216 05:05:32.147323  292503 ubuntu.go:182] provisioning hostname "newest-cni-418580"
	I1216 05:05:32.147387  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.165073  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.165297  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.165310  292503 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-418580 && echo "newest-cni-418580" | sudo tee /etc/hostname
	I1216 05:05:32.302827  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-418580
	
	I1216 05:05:32.302918  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.322729  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.323008  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.323056  292503 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-418580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-418580/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-418580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:32.450471  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:32.450502  292503 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:32.450536  292503 ubuntu.go:190] setting up certificates
	I1216 05:05:32.450556  292503 provision.go:84] configureAuth start
	I1216 05:05:32.450612  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:32.471395  292503 provision.go:143] copyHostCerts
	I1216 05:05:32.471462  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:32.471473  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:32.471555  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:32.471711  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:32.471728  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:32.471774  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:32.471881  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:32.471892  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:32.471939  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:32.472109  292503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.newest-cni-418580 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-418580]
	I1216 05:05:32.499918  292503 provision.go:177] copyRemoteCerts
	I1216 05:05:32.499967  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:32.499999  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.518449  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:32.614228  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:32.635678  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:05:32.653888  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:05:32.672293  292503 provision.go:87] duration metric: took 221.714395ms to configureAuth
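
configureAuth above generates a server certificate whose SAN list (127.0.0.1, 192.168.85.2, localhost, minikube, the profile name) covers every address the API server will be reached on. A self-contained Go sketch of producing such a certificate; note it self-signs for brevity, whereas minikube signs against its ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-418580"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as seen in the san=[...] log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-418580"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
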
	I1216 05:05:32.672321  292503 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:32.672501  292503 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:32.672618  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.693226  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.693545  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.693567  292503 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:32.968063  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:32.968093  292503 machine.go:97] duration metric: took 3.972285388s to provisionDockerMachine
	I1216 05:05:32.968105  292503 client.go:176] duration metric: took 7.918682906s to LocalClient.Create
	I1216 05:05:32.968125  292503 start.go:167] duration metric: took 7.918767525s to libmachine.API.Create "newest-cni-418580"
	I1216 05:05:32.968137  292503 start.go:293] postStartSetup for "newest-cni-418580" (driver="docker")
	I1216 05:05:32.968155  292503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:32.968222  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:32.968275  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.987417  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.081927  292503 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:33.086603  292503 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:33.086638  292503 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:33.086652  292503 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:33.086706  292503 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:33.086801  292503 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:33.086920  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:33.095668  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:33.117594  292503 start.go:296] duration metric: took 149.443086ms for postStartSetup
	I1216 05:05:33.117910  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:33.136597  292503 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/config.json ...
	I1216 05:05:33.136935  292503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:33.136989  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.157818  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.249294  292503 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:33.254260  292503 start.go:128] duration metric: took 8.207842149s to createHost
	I1216 05:05:33.254284  292503 start.go:83] releasing machines lock for "newest-cni-418580", held for 8.207977773s
	I1216 05:05:33.254363  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:33.272954  292503 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:33.273028  292503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:33.273042  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.273100  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.291480  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.293050  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.382769  292503 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:33.442424  292503 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:33.477702  292503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:33.482676  292503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:33.482740  292503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:33.509906  292503 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:05:33.509941  292503 start.go:496] detecting cgroup driver to use...
	I1216 05:05:33.509974  292503 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:33.510045  292503 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:33.529423  292503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:33.545150  292503 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:33.545215  292503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:33.563032  292503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:33.581565  292503 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:33.666314  292503 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:33.758723  292503 docker.go:234] disabling docker service ...
	I1216 05:05:33.758784  292503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:33.777100  292503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:33.790420  292503 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:33.893910  292503 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:33.985401  292503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:33.997878  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:34.012256  292503 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:34.012314  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.022649  292503 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:34.022700  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.031562  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.042658  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.053710  292503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:34.062165  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.070697  292503 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.083923  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.092579  292503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:34.099986  292503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:34.107536  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.189571  292503 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:05:34.334382  292503 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:34.334464  292503 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:34.338658  292503 start.go:564] Will wait 60s for crictl version
	I1216 05:05:34.338717  292503 ssh_runner.go:195] Run: which crictl
	I1216 05:05:34.342328  292503 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:34.366604  292503 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:05:34.366695  292503 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.396642  292503 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.433283  292503 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 05:05:34.435360  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:34.453408  292503 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:05:34.457897  292503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.470954  292503 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
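
The sequence of sed invocations earlier (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) amounts to idempotent line rewrites of /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and crio restart. A Go sketch of the two central rewrites, with the same file path and replacement values as logged (error handling trimmed; writing that path requires root):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
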
	I1216 05:05:33.681706  293223 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:33.688766  293223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:33.730139  293223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:33.735582  293223 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:33.735658  293223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:33.743577  293223 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:05:33.743600  293223 start.go:496] detecting cgroup driver to use...
	I1216 05:05:33.743629  293223 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:33.743661  293223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:33.757282  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:33.769587  293223 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:33.769641  293223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:33.785195  293223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:33.797271  293223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:33.890845  293223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:33.981546  293223 docker.go:234] disabling docker service ...
	I1216 05:05:33.981625  293223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:33.997128  293223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:34.009761  293223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:34.091773  293223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:34.181978  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:34.195138  293223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:34.209731  293223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:34.209779  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.218732  293223 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:34.218793  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.227814  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.236762  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.246292  293223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:34.254357  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.262928  293223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.271401  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.280072  293223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:34.288086  293223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:34.295663  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.380378  293223 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:05:34.524358  293223 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:34.524440  293223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:34.528707  293223 start.go:564] Will wait 60s for crictl version
	I1216 05:05:34.528766  293223 ssh_runner.go:195] Run: which crictl
	I1216 05:05:34.532850  293223 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:34.559122  293223 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:05:34.559232  293223 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.588698  293223 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.622232  293223 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:05:34.472170  292503 kubeadm.go:884] updating cluster {Name:newest-cni-418580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:05:34.472345  292503 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:05:34.472410  292503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.506214  292503 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.506240  292503 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:05:34.506293  292503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.533288  292503 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.533310  292503 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:05:34.533332  292503 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 05:05:34.533435  292503 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-418580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:05:34.533515  292503 ssh_runner.go:195] Run: crio config
	I1216 05:05:34.582963  292503 cni.go:84] Creating CNI manager for ""
	I1216 05:05:34.582990  292503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:34.583009  292503 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 05:05:34.583050  292503 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-418580 NodeName:newest-cni-418580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:05:34.583227  292503 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-418580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
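
The kubeadm/kubelet/kube-proxy YAML above is effectively rendered from Go templates, with per-profile values (versions, CIDRs, socket paths) substituted in, as the {Component:... Key:... Value:...} ExtraOptions in the config dump suggest. A toy text/template sketch of that mechanism for the KubeletConfiguration tail; the struct fields here are invented for this illustration and are not minikube's template names:

package main

import (
	"os"
	"text/template"
)

// A pared-down stand-in for the KubeletConfiguration section above.
const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.DNSDomain}}"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletCfg))
	err := t.Execute(os.Stdout, struct {
		CgroupDriver, CRISocket, DNSDomain string
	}{"systemd", "unix:///var/run/crio/crio.sock", "cluster.local"})
	if err != nil {
		panic(err)
	}
}
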
	I1216 05:05:34.583301  292503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:05:34.592247  292503 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:05:34.592309  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:05:34.600451  292503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 05:05:34.615740  292503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:05:34.631629  292503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 05:05:34.645966  292503 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:05:34.649983  292503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.660983  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.754997  292503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:34.777628  292503 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580 for IP: 192.168.85.2
	I1216 05:05:34.777642  292503 certs.go:195] generating shared ca certs ...
	I1216 05:05:34.777658  292503 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.777857  292503 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:05:34.777914  292503 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:05:34.777931  292503 certs.go:257] generating profile certs ...
	I1216 05:05:34.778003  292503 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key
	I1216 05:05:34.778143  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt with IP's: []
	I1216 05:05:34.623929  293223 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:34.642569  293223 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 05:05:34.646797  293223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.657490  293223 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:05:34.657635  293223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:34.657700  293223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.694359  293223 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.694383  293223 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:05:34.694449  293223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.722765  293223 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.722791  293223 cache_images.go:86] Images are preloaded, skipping loading
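
The two `crictl images --output json` runs above are how the preload check decides whether any images still need loading. A small sketch of parsing that output; the JSON field names (`images`, `repoTags`) follow crictl's usual CRI image dump but should be treated as an assumption here:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the shape of `crictl images --output json` (assumed).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	// Print every tag reported by the runtime; a preload check would compare
	// this set against the expected image list for the Kubernetes version.
	for _, img := range list.Images {
		fmt.Println(img.RepoTags)
	}
}
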
	I1216 05:05:34.722798  293223 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1216 05:05:34.722885  293223 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-375252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
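
The unit text above uses the standard systemd override pattern: an empty `ExecStart=` first clears the command inherited from the base kubelet.service, and the second `ExecStart=` then sets the full command line. A hedged sketch of rendering such a drop-in with Go's text/template, with illustrative field names and the values taken from this log:

package main

import (
	"log"
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
# The empty ExecStart= clears the command inherited from kubelet.service;
# the second line then installs the override.
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.34.2",                      // kubelet binary version above
		"Node":    "default-k8s-diff-port-375252", // node name above
		"IP":      "192.168.76.2",                 // node IP above
	})
	if err != nil {
		log.Fatal(err)
	}
}
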
	I1216 05:05:34.722943  293223 ssh_runner.go:195] Run: crio config
	I1216 05:05:34.768295  293223 cni.go:84] Creating CNI manager for ""
	I1216 05:05:34.768319  293223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:34.768330  293223 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:05:34.768351  293223 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-375252 NodeName:default-k8s-diff-port-375252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:05:34.768461  293223 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-375252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:05:34.768522  293223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:05:34.776980  293223 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:05:34.777057  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:05:34.784977  293223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1216 05:05:34.799152  293223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:05:34.814226  293223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1216 05:05:34.829713  293223 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:05:34.833416  293223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
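
The bash one-liner above keeps /etc/hosts idempotent: it filters out any stale line for the host name, then appends the current mapping and copies the result back. The same logic as a small Go sketch (the helper is illustrative, not minikube code, and needs root to write /etc/hosts):

package main

import (
	"log"
	"os"
	"strings"
)

func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like the grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host) // append the current mapping
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := updateHosts("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
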
	I1216 05:05:34.843862  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.923578  293223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:34.953581  293223 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252 for IP: 192.168.76.2
	I1216 05:05:34.953601  293223 certs.go:195] generating shared ca certs ...
	I1216 05:05:34.953616  293223 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.953767  293223 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:05:34.953838  293223 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:05:34.953859  293223 certs.go:257] generating profile certs ...
	I1216 05:05:34.953970  293223 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key
	I1216 05:05:34.954080  293223 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f
	I1216 05:05:34.954137  293223 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key
	I1216 05:05:34.954290  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:05:34.954338  293223 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:05:34.954352  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:05:34.954447  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:05:34.954496  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:05:34.954531  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:05:34.954590  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:34.955229  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:05:34.975322  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:05:34.994253  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:05:35.014878  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:05:35.040118  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 05:05:35.059441  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:05:35.076865  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:05:35.094221  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 05:05:35.112726  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:05:35.130358  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:05:35.148722  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:05:35.166552  293223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:05:35.179253  293223 ssh_runner.go:195] Run: openssl version
	I1216 05:05:35.185464  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.194548  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:05:35.202787  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.207145  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.207204  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.243264  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:05:35.250997  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.258211  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:05:35.265357  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.269279  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.269321  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.304617  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:05:35.312873  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.320543  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:05:35.328266  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.331987  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.332071  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.369465  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:05:35.377085  293223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:05:35.380947  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:05:35.422121  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:05:35.462335  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:05:35.508430  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:05:35.559688  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:05:35.615416  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
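
Each `openssl x509 ... -checkend 86400` run above asks whether a certificate expires within the next 24 hours (86400 seconds). The equivalent check can be done natively with crypto/x509; a minimal sketch, assuming the cert path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
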
	I1216 05:05:35.663433  293223 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:35.663546  293223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:05:35.663626  293223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:05:35.697125  293223 cri.go:89] found id: "3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45"
	I1216 05:05:35.697150  293223 cri.go:89] found id: "fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3"
	I1216 05:05:35.697156  293223 cri.go:89] found id: "bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff"
	I1216 05:05:35.697169  293223 cri.go:89] found id: "5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01"
	I1216 05:05:35.697174  293223 cri.go:89] found id: ""
	I1216 05:05:35.697297  293223 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:05:35.710419  293223 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:35Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:35.710548  293223 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:05:35.718597  293223 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:05:35.718613  293223 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:05:35.718655  293223 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:05:35.727178  293223 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:05:35.728054  293223 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-375252" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:35.728609  293223 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-375252" cluster setting kubeconfig missing "default-k8s-diff-port-375252" context setting]
	I1216 05:05:35.729803  293223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.732044  293223 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:05:35.741439  293223 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 05:05:35.741468  293223 kubeadm.go:602] duration metric: took 22.850213ms to restartPrimaryControlPlane
	I1216 05:05:35.741479  293223 kubeadm.go:403] duration metric: took 78.055706ms to StartCluster
	I1216 05:05:35.741495  293223 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.741561  293223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:35.743663  293223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.743920  293223 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:35.744192  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:35.744255  293223 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:05:35.744337  293223 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744353  293223 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.744361  293223 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:05:35.744385  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.744837  293223 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744857  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.744859  293223 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744867  293223 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.744878  293223 addons.go:248] addon dashboard should already be in state true
	I1216 05:05:35.744882  293223 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-375252"
	I1216 05:05:35.744905  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.745234  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.745413  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.746307  293223 out.go:179] * Verifying Kubernetes components...
	I1216 05:05:35.747617  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:35.779083  293223 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:05:35.779717  293223 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.779773  293223 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:05:35.779803  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.780704  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.782924  293223 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 05:05:35.782935  293223 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:05:34.912976  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt ...
	I1216 05:05:34.913002  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt: {Name:mkf026a43bf34828560cdd861dc967a4da3a0a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.913190  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key ...
	I1216 05:05:34.913205  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key: {Name:mk9f0f25e1ac61a51fd17fa02f026219d2b811d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.913291  292503 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e
	I1216 05:05:34.913325  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1216 05:05:34.970132  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e ...
	I1216 05:05:34.970158  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e: {Name:mk2d44e3904e6a658ff0ee37692a05dfb5d718cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.970338  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e ...
	I1216 05:05:34.970360  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e: {Name:mkca5d8ab902ee32a3562f650b1165f4e10184d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.970489  292503 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt
	I1216 05:05:34.970612  292503 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key
	I1216 05:05:34.970698  292503 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key
	I1216 05:05:34.970726  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt with IP's: []
	I1216 05:05:35.230717  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt ...
	I1216 05:05:35.230762  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt: {Name:mk2bf2e2cf7c916f101948379e3dbb6fdb61a8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.230936  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key ...
	I1216 05:05:35.230951  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key: {Name:mkf0aafa828bb13a430c43eed8a9d89f985fe446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.231223  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:05:35.231284  292503 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:05:35.231307  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:05:35.231346  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:05:35.231400  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:05:35.231441  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:05:35.231498  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:35.232006  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:05:35.250349  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:05:35.267912  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:05:35.285259  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:05:35.302305  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:05:35.320455  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:05:35.338139  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:05:35.355217  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:05:35.372674  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:05:35.396550  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:05:35.414853  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:05:35.433117  292503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:05:35.445803  292503 ssh_runner.go:195] Run: openssl version
	I1216 05:05:35.452761  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.462981  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:05:35.472831  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.478677  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.478741  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.527788  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:05:35.538626  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:05:35.549759  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.561348  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:05:35.569778  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.574227  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.574289  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.628396  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:05:35.639703  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:05:35.650632  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.659686  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:05:35.669120  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.673306  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.673366  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.722247  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:05:35.731511  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
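
The `openssl x509 -hash` / `ln -fs` pairs above maintain the OpenSSL-style trust directory: each CA certificate becomes reachable via a `<subject-hash>.0` symlink in /etc/ssl/certs. Since the subject-hash algorithm is OpenSSL-specific, this sketch shells out to openssl rather than reimplementing it (illustrative helper; must run with privileges to write /etc/ssl/certs):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a certificate and creates
// the corresponding <hash>.0 symlink, mirroring the ln -fs step in the log.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA above
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/etc/ssl/certs/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
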
	I1216 05:05:35.740868  292503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:05:35.745336  292503 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:05:35.745401  292503 kubeadm.go:401] StartCluster: {Name:newest-cni-418580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:35.745487  292503 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:05:35.746058  292503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:05:35.801678  292503 cri.go:89] found id: ""
	I1216 05:05:35.801751  292503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:05:35.816616  292503 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:05:35.828849  292503 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:05:35.828906  292503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:05:35.849259  292503 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:05:35.849285  292503 kubeadm.go:158] found existing configuration files:
	
	I1216 05:05:35.849332  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:05:35.861581  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:05:35.861651  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:05:35.872359  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:05:35.882005  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:05:35.882094  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:05:35.892470  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:05:35.903276  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:05:35.903332  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:05:35.910839  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:05:35.918764  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:05:35.918817  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:05:35.927327  292503 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:05:35.983497  292503 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:05:35.983639  292503 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:05:36.078326  292503 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:05:36.078423  292503 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:05:36.078478  292503 kubeadm.go:319] OS: Linux
	I1216 05:05:36.078538  292503 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:05:36.078632  292503 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:05:36.078725  292503 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:05:36.078801  292503 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:05:36.078868  292503 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:05:36.078935  292503 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:05:36.079014  292503 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:05:36.079098  292503 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:05:36.155571  292503 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:05:36.155718  292503 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:05:36.155845  292503 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:05:36.166440  292503 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1216 05:05:34.106689  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:36.109040  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:36.607281  284202 pod_ready.go:94] pod "coredns-7d764666f9-7l2ds" is "Ready"
	I1216 05:05:36.607311  284202 pod_ready.go:86] duration metric: took 34.506370037s for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.609962  284202 pod_ready.go:83] waiting for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.613794  284202 pod_ready.go:94] pod "etcd-no-preload-990631" is "Ready"
	I1216 05:05:36.613816  284202 pod_ready.go:86] duration metric: took 3.832267ms for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.615814  284202 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.619531  284202 pod_ready.go:94] pod "kube-apiserver-no-preload-990631" is "Ready"
	I1216 05:05:36.619551  284202 pod_ready.go:86] duration metric: took 3.7182ms for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.621267  284202 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.805520  284202 pod_ready.go:94] pod "kube-controller-manager-no-preload-990631" is "Ready"
	I1216 05:05:36.805552  284202 pod_ready.go:86] duration metric: took 184.26649ms for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.004943  284202 pod_ready.go:83] waiting for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.405104  284202 pod_ready.go:94] pod "kube-proxy-2bjbr" is "Ready"
	I1216 05:05:37.405148  284202 pod_ready.go:86] duration metric: took 400.175347ms for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.605530  284202 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:38.005242  284202 pod_ready.go:94] pod "kube-scheduler-no-preload-990631" is "Ready"
	I1216 05:05:38.005269  284202 pod_ready.go:86] duration metric: took 399.71845ms for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:38.005280  284202 pod_ready.go:40] duration metric: took 35.907405331s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
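
The pod_ready waits above poll each kube-system pod until its Ready condition reports true (or the pod is gone). A sketch of the same wait using client-go, assuming the k8s.io/client-go and k8s.io/apimachinery modules; the names and paths are taken from the log, but the helper itself is illustrative rather than minikube's code:

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod every 2s until its Ready condition is true
// or the timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(cs, "kube-system", "kube-scheduler-no-preload-990631", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
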
	I1216 05:05:38.078833  284202 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:05:38.083862  284202 out.go:179] * Done! kubectl is now configured to use "no-preload-990631" cluster and "default" namespace by default
	I1216 05:05:36.168529  292503 out.go:252]   - Generating certificates and keys ...
	I1216 05:05:36.168629  292503 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:05:36.168732  292503 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:05:36.271180  292503 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:05:36.355639  292503 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 05:05:36.514877  292503 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 05:05:36.614071  292503 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 05:05:36.827128  292503 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 05:05:36.827351  292503 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-418580] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:05:36.974476  292503 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 05:05:36.974620  292503 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-418580] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:05:37.026221  292503 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 05:05:37.227397  292503 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 05:05:37.493691  292503 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 05:05:37.493853  292503 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:05:37.662278  292503 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:05:37.748294  292503 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:05:37.901537  292503 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:05:38.083466  292503 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:05:38.279485  292503 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:05:38.280369  292503 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:05:38.285287  292503 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:05:35.784693  293223 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:35.784717  293223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:05:35.784735  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:05:35.784752  293223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:05:35.784789  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.784814  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.819593  293223 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:35.819612  293223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:05:35.819664  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.819933  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.823853  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.844792  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.921834  293223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:35.938442  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:35.941483  293223 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-375252" to be "Ready" ...
	I1216 05:05:35.950146  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:05:35.950165  293223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:05:35.959892  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:35.969036  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:05:35.969063  293223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:05:35.989078  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:05:35.989103  293223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:05:36.011127  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:05:36.011150  293223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:05:36.031446  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:05:36.031478  293223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:05:36.045414  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:05:36.045444  293223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:05:36.064565  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:05:36.064589  293223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:05:36.081324  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:05:36.081348  293223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:05:36.095008  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:36.095044  293223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:05:36.112771  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:38.170264  293223 node_ready.go:49] node "default-k8s-diff-port-375252" is "Ready"
	I1216 05:05:38.170298  293223 node_ready.go:38] duration metric: took 2.228781477s for node "default-k8s-diff-port-375252" to be "Ready" ...
	I1216 05:05:38.170312  293223 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:38.170360  293223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:38.776267  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.837791808s)
	I1216 05:05:38.776306  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.816384453s)
	I1216 05:05:38.776428  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.663623652s)
	I1216 05:05:38.776458  293223 api_server.go:72] duration metric: took 3.032511707s to wait for apiserver process to appear ...
	I1216 05:05:38.776471  293223 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:38.776496  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:38.778346  293223 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-375252 addons enable metrics-server
	
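The hint printed above is a complete command. As a sketch of the follow-up (profile name taken from this log; the APIService name is the standard one registered by metrics-server), enabling the addon and confirming it serves metrics looks like:

	# enable the addon on the profile from this log
	minikube -p default-k8s-diff-port-375252 addons enable metrics-server
	# the metrics APIService should eventually report Available=True
	kubectl get apiservice v1beta1.metrics.k8s.io
	# once available, resource metrics resolve
	kubectl top nodes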
	I1216 05:05:38.781799  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
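The per-check listing above is the apiserver's verbose healthz payload; the two [-] entries are post-start hooks that had not finished when the probe ran. The same breakdown can be fetched by hand (endpoint and port taken from the log; /healthz is readable anonymously via the system:public-info-viewer role, so -k suffices for a quick probe):

	# raw probe against the endpoint from the log
	curl -k "https://192.168.76.2:8444/healthz?verbose"
	# equivalent, going through kubectl credentials
	kubectl get --raw='/healthz?verbose'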
	I1216 05:05:38.784278  293223 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1216 05:05:33.808006  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:35.812587  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:38.308806  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:38.287293  292503 out.go:252]   - Booting up control plane ...
	I1216 05:05:38.287409  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:05:38.287505  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:05:38.287591  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:05:38.306562  292503 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:05:38.306822  292503 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:05:38.315641  292503 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:05:38.317230  292503 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:05:38.317321  292503 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:05:38.439332  292503 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:05:38.439522  292503 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:05:38.940884  292503 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.794985ms
	I1216 05:05:38.943858  292503 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 05:05:38.943984  292503 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 05:05:38.944138  292503 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 05:05:38.944228  292503 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 05:05:39.947945  292503 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004000869s
	I1216 05:05:41.136529  292503 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.192438136s
	I1216 05:05:42.945750  292503 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001716849s
	I1216 05:05:42.965946  292503 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 05:05:42.978062  292503 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 05:05:42.988785  292503 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 05:05:42.989107  292503 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-418580 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 05:05:42.998633  292503 kubeadm.go:319] [bootstrap-token] Using token: w2w8nj.tpu6zggrwl1bblxh
	I1216 05:05:38.785917  293223 addons.go:530] duration metric: took 3.04166463s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1216 05:05:39.277101  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:39.282770  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:39.777190  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:39.781241  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1216 05:05:39.782224  293223 api_server.go:141] control plane version: v1.34.2
	I1216 05:05:39.782245  293223 api_server.go:131] duration metric: took 1.005765809s to wait for apiserver health ...
	I1216 05:05:39.782254  293223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:39.785805  293223 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:39.785846  293223 system_pods.go:61] "coredns-66bc5c9577-65x5z" [8c610230-fb3c-4bef-a6d3-8870276ed93e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:39.785855  293223 system_pods.go:61] "etcd-default-k8s-diff-port-375252" [c04dc1e6-e72a-4915-8ed7-4976b69962c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:39.785870  293223 system_pods.go:61] "kindnet-xml94" [2f0ff435-e0ff-441e-bdf8-0cf9a6da784c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:39.785882  293223 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-375252" [51755996-a6d5-4427-8ecf-d5635d5bae94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:39.785897  293223 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-375252" [01874cfa-681a-48b1-8378-2f33c21185ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:39.785908  293223 system_pods.go:61] "kube-proxy-dmw9t" [c72d3d28-eaa9-4a35-b7b9-5207edf40372] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:39.785916  293223 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-375252" [e1c1c032-6d2a-4ca9-bf67-137e7f061874] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:39.785927  293223 system_pods.go:61] "storage-provisioner" [e819c1d3-f079-4b3e-a4b1-9d2cfbf37344] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:39.785934  293223 system_pods.go:74] duration metric: took 3.674729ms to wait for pod list to return data ...
	I1216 05:05:39.785950  293223 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:39.788192  293223 default_sa.go:45] found service account: "default"
	I1216 05:05:39.788211  293223 default_sa.go:55] duration metric: took 2.254358ms for default service account to be created ...
	I1216 05:05:39.788220  293223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:05:39.790710  293223 system_pods.go:86] 8 kube-system pods found
	I1216 05:05:39.790733  293223 system_pods.go:89] "coredns-66bc5c9577-65x5z" [8c610230-fb3c-4bef-a6d3-8870276ed93e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:39.790755  293223 system_pods.go:89] "etcd-default-k8s-diff-port-375252" [c04dc1e6-e72a-4915-8ed7-4976b69962c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:39.790763  293223 system_pods.go:89] "kindnet-xml94" [2f0ff435-e0ff-441e-bdf8-0cf9a6da784c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:39.790768  293223 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-375252" [51755996-a6d5-4427-8ecf-d5635d5bae94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:39.790775  293223 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-375252" [01874cfa-681a-48b1-8378-2f33c21185ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:39.790782  293223 system_pods.go:89] "kube-proxy-dmw9t" [c72d3d28-eaa9-4a35-b7b9-5207edf40372] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:39.790788  293223 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-375252" [e1c1c032-6d2a-4ca9-bf67-137e7f061874] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:39.790796  293223 system_pods.go:89] "storage-provisioner" [e819c1d3-f079-4b3e-a4b1-9d2cfbf37344] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:39.790802  293223 system_pods.go:126] duration metric: took 2.577172ms to wait for k8s-apps to be running ...
	I1216 05:05:39.790808  293223 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:05:39.790842  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:39.803851  293223 system_svc.go:56] duration metric: took 13.033531ms WaitForService to wait for kubelet
	I1216 05:05:39.803883  293223 kubeadm.go:587] duration metric: took 4.059935198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:39.803905  293223 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:39.807402  293223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:39.807433  293223 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:39.807450  293223 node_conditions.go:105] duration metric: took 3.538982ms to run NodePressure ...
	I1216 05:05:39.807465  293223 start.go:242] waiting for startup goroutines ...
	I1216 05:05:39.807473  293223 start.go:247] waiting for cluster config update ...
	I1216 05:05:39.807485  293223 start.go:256] writing updated cluster config ...
	I1216 05:05:39.807774  293223 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:39.811763  293223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:39.815359  293223 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-65x5z" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:05:41.820227  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:40.310699  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:42.309120  285878 pod_ready.go:94] pod "coredns-66bc5c9577-47h64" is "Ready"
	I1216 05:05:42.309150  285878 pod_ready.go:86] duration metric: took 33.006648417s for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.311734  285878 pod_ready.go:83] waiting for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.318072  285878 pod_ready.go:94] pod "etcd-embed-certs-127599" is "Ready"
	I1216 05:05:42.318101  285878 pod_ready.go:86] duration metric: took 6.341713ms for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.320261  285878 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.324744  285878 pod_ready.go:94] pod "kube-apiserver-embed-certs-127599" is "Ready"
	I1216 05:05:42.324772  285878 pod_ready.go:86] duration metric: took 4.487423ms for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.327251  285878 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.506418  285878 pod_ready.go:94] pod "kube-controller-manager-embed-certs-127599" is "Ready"
	I1216 05:05:42.506458  285878 pod_ready.go:86] duration metric: took 179.18035ms for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.706844  285878 pod_ready.go:83] waiting for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.107603  285878 pod_ready.go:94] pod "kube-proxy-j79cw" is "Ready"
	I1216 05:05:43.107634  285878 pod_ready.go:86] duration metric: took 400.764534ms for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.306943  285878 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.706579  285878 pod_ready.go:94] pod "kube-scheduler-embed-certs-127599" is "Ready"
	I1216 05:05:43.706605  285878 pod_ready.go:86] duration metric: took 399.628671ms for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.706621  285878 pod_ready.go:40] duration metric: took 34.408131714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:43.766223  285878 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:05:43.767753  285878 out.go:179] * Done! kubectl is now configured to use "embed-certs-127599" cluster and "default" namespace by default
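"Done!" here means the kubeconfig context was switched to the new cluster. A minimal manual confirmation (cluster name from the log, plain kubectl):

	kubectl config current-context   # expect: embed-certs-127599
	kubectl get nodes                # the node should report Ready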
	I1216 05:05:43.000076  292503 out.go:252]   - Configuring RBAC rules ...
	I1216 05:05:43.000232  292503 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 05:05:43.006930  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 05:05:43.013081  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 05:05:43.015993  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 05:05:43.019821  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 05:05:43.022802  292503 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 05:05:43.361353  292503 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 05:05:43.785608  292503 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 05:05:44.352605  292503 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 05:05:44.353897  292503 kubeadm.go:319] 
	I1216 05:05:44.353997  292503 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 05:05:44.354009  292503 kubeadm.go:319] 
	I1216 05:05:44.354128  292503 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 05:05:44.354140  292503 kubeadm.go:319] 
	I1216 05:05:44.354176  292503 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 05:05:44.354269  292503 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 05:05:44.354347  292503 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 05:05:44.354358  292503 kubeadm.go:319] 
	I1216 05:05:44.354432  292503 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 05:05:44.354442  292503 kubeadm.go:319] 
	I1216 05:05:44.354516  292503 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 05:05:44.354538  292503 kubeadm.go:319] 
	I1216 05:05:44.354607  292503 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 05:05:44.354731  292503 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 05:05:44.354839  292503 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 05:05:44.354850  292503 kubeadm.go:319] 
	I1216 05:05:44.354979  292503 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 05:05:44.355113  292503 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 05:05:44.355125  292503 kubeadm.go:319] 
	I1216 05:05:44.355233  292503 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w2w8nj.tpu6zggrwl1bblxh \
	I1216 05:05:44.355379  292503 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 05:05:44.355418  292503 kubeadm.go:319] 	--control-plane 
	I1216 05:05:44.355425  292503 kubeadm.go:319] 
	I1216 05:05:44.355541  292503 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 05:05:44.355547  292503 kubeadm.go:319] 
	I1216 05:05:44.355652  292503 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w2w8nj.tpu6zggrwl1bblxh \
	I1216 05:05:44.355787  292503 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
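The join commands above embed a bootstrap token with a default TTL of 24h. If it expires before a node joins, a fresh ready-to-paste line can be printed on the control plane (standard kubeadm subcommand, nothing profile-specific assumed):

	sudo kubeadm token create --print-join-command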
	I1216 05:05:44.359394  292503 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:05:44.359535  292503 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:05:44.359550  292503 cni.go:84] Creating CNI manager for ""
	I1216 05:05:44.359559  292503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:44.361669  292503 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 05:05:44.362987  292503 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 05:05:44.368597  292503 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1216 05:05:44.368615  292503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 05:05:44.386199  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
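After the CNI manifest is applied, the rollout can be watched directly. The object name and label below follow the upstream kindnet manifest (assumed here, not read from this log), so adjust if the deployed manifest differs:

	kubectl -n kube-system rollout status daemonset kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide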
	W1216 05:05:43.822303  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:45.823500  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:48.320056  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	I1216 05:05:44.892254  292503 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 05:05:44.892325  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:44.892390  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-418580 minikube.k8s.io/updated_at=2025_12_16T05_05_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=newest-cni-418580 minikube.k8s.io/primary=true
	I1216 05:05:44.905578  292503 ops.go:34] apiserver oom_adj: -16
	I1216 05:05:44.996594  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:45.497238  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:45.997166  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:46.496875  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:46.997245  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:47.497214  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:47.997272  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:48.497203  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:48.996890  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:49.067493  292503 kubeadm.go:1114] duration metric: took 4.175226314s to wait for elevateKubeSystemPrivileges
	I1216 05:05:49.067533  292503 kubeadm.go:403] duration metric: took 13.322133667s to StartCluster
	I1216 05:05:49.067556  292503 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:49.067620  292503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:49.070444  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:49.070849  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 05:05:49.071407  292503 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:49.072456  292503 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:05:49.072571  292503 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-418580"
	I1216 05:05:49.072594  292503 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-418580"
	I1216 05:05:49.072627  292503 host.go:66] Checking if "newest-cni-418580" exists ...
	I1216 05:05:49.072680  292503 addons.go:70] Setting default-storageclass=true in profile "newest-cni-418580"
	I1216 05:05:49.072712  292503 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-418580"
	I1216 05:05:49.073075  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.072422  292503 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:49.073196  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.074924  292503 out.go:179] * Verifying Kubernetes components...
	I1216 05:05:49.077139  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:49.098754  292503 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:05:49.100209  292503 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:49.100235  292503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:05:49.100297  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:49.101275  292503 addons.go:239] Setting addon default-storageclass=true in "newest-cni-418580"
	I1216 05:05:49.101340  292503 host.go:66] Checking if "newest-cni-418580" exists ...
	I1216 05:05:49.101798  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.131640  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:49.132569  292503 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:49.132591  292503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:05:49.132686  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:49.173979  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:49.209645  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
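The pipeline above splices a hosts{} stanza for host.minikube.internal into the CoreDNS Corefile and replaces the ConfigMap in place. The result can be inspected with plain kubectl:

	# expect a hosts block mapping 192.168.85.1 to host.minikube.internal
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'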
	I1216 05:05:49.255857  292503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:49.269695  292503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:49.303402  292503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:49.394100  292503 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1216 05:05:49.395894  292503 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:49.395955  292503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:49.590389  292503 api_server.go:72] duration metric: took 517.235446ms to wait for apiserver process to appear ...
	I1216 05:05:49.590414  292503 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:49.590436  292503 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:05:49.596111  292503 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:05:49.597185  292503 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:05:49.597220  292503 api_server.go:131] duration metric: took 6.79936ms to wait for apiserver health ...
	I1216 05:05:49.597229  292503 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:49.600173  292503 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:49.600202  292503 system_pods.go:61] "coredns-7d764666f9-67tdz" [2e7ab1b2-db44-41e7-9601-fe1fb0f149b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:05:49.600210  292503 system_pods.go:61] "etcd-newest-cni-418580" [5345f203-8998-46d2-af9b-9f60ee31c9b1] Running
	I1216 05:05:49.600220  292503 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 05:05:49.600222  292503 system_pods.go:61] "kindnet-btffs" [54b00649-4160-4dab-bce1-83080b927bde] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:49.600237  292503 system_pods.go:61] "kube-apiserver-newest-cni-418580" [922b709d-f979-411d-87d3-1d6e2085c405] Running
	I1216 05:05:49.600308  292503 system_pods.go:61] "kube-controller-manager-newest-cni-418580" [41246201-9b24-43b9-a291-b5d087085e7a] Running
	I1216 05:05:49.600335  292503 system_pods.go:61] "kube-proxy-ls9s9" [dbf8d374-0c38-4c64-ac2c-9a71db8b4223] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:49.600346  292503 system_pods.go:61] "kube-scheduler-newest-cni-418580" [7a507411-b5e7-4bc1-8740-e07e3b476f8d] Running
	I1216 05:05:49.600363  292503 system_pods.go:61] "storage-provisioner" [fa6b12e1-67b9-40bb-93c3-42a32f0d9ed4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:05:49.600373  292503 system_pods.go:74] duration metric: took 3.137662ms to wait for pod list to return data ...
	I1216 05:05:49.600386  292503 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:49.601530  292503 addons.go:530] duration metric: took 529.068992ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:05:49.602559  292503 default_sa.go:45] found service account: "default"
	I1216 05:05:49.602574  292503 default_sa.go:55] duration metric: took 2.179965ms for default service account to be created ...
	I1216 05:05:49.602582  292503 kubeadm.go:587] duration metric: took 529.436023ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 05:05:49.602608  292503 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:49.604976  292503 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:49.605000  292503 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:49.605061  292503 node_conditions.go:105] duration metric: took 2.447077ms to run NodePressure ...
	I1216 05:05:49.605076  292503 start.go:242] waiting for startup goroutines ...
	I1216 05:05:49.899603  292503 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-418580" context rescaled to 1 replicas
	I1216 05:05:49.899646  292503 start.go:247] waiting for cluster config update ...
	I1216 05:05:49.899661  292503 start.go:256] writing updated cluster config ...
	I1216 05:05:49.900004  292503 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:49.957073  292503 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:05:49.959152  292503 out.go:179] * Done! kubectl is now configured to use "newest-cni-418580" cluster and "default" namespace by default
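The skew note is informational: kubectl supports one minor version of skew against the apiserver, so a 1.34 client with a 1.35.0-beta.0 control plane is within policy. Both versions can be checked side by side with:

	kubectl version --output=yaml   # reports clientVersion and serverVersion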
	W1216 05:05:50.321050  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:52.321791  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 16 05:05:21 no-preload-990631 crio[570]: time="2025-12-16T05:05:21.616676599Z" level=info msg="Started container" PID=1730 containerID=d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper id=6bf987c5-81e3-46cb-a7ad-40ff34176571 name=/runtime.v1.RuntimeService/StartContainer sandboxID=21902216ee3f08cee4dd6638eaa53487a32b13a6f37c322b2d5e16aee532d290
	Dec 16 05:05:21 no-preload-990631 crio[570]: time="2025-12-16T05:05:21.652934353Z" level=info msg="Removing container: 97a585270f13242f4a3d89e07445d78f365f0d47510dcad4229c6c38e6bf6223" id=fbd65280-2680-4308-863b-e95d050e49ec name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:21 no-preload-990631 crio[570]: time="2025-12-16T05:05:21.672828479Z" level=info msg="Removed container 97a585270f13242f4a3d89e07445d78f365f0d47510dcad4229c6c38e6bf6223: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper" id=fbd65280-2680-4308-863b-e95d050e49ec name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.678427036Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ef9748be-9b6b-4792-837b-59151dbf2744 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.679426495Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f3edaaf4-964e-443c-a103-794eceaa550f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.680710997Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2c17f197-0887-44a1-947e-916c8383de96 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.680943703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.684965771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.685134092Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9697cd32fb1a9fe588e49986a6c220003f0d4542da71437a248cdbf16b5276eb/merged/etc/passwd: no such file or directory"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.685169942Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9697cd32fb1a9fe588e49986a6c220003f0d4542da71437a248cdbf16b5276eb/merged/etc/group: no such file or directory"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.686475853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.709284158Z" level=info msg="Created container 75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b: kube-system/storage-provisioner/storage-provisioner" id=2c17f197-0887-44a1-947e-916c8383de96 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.709847734Z" level=info msg="Starting container: 75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b" id=f1877710-318e-405b-bb7f-ce1ed40402e5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:31 no-preload-990631 crio[570]: time="2025-12-16T05:05:31.711710345Z" level=info msg="Started container" PID=1746 containerID=75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b description=kube-system/storage-provisioner/storage-provisioner id=f1877710-318e-405b-bb7f-ce1ed40402e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93bcbe3bc6e0305e91f945121d05ca3432f0a96e7ebabffaa2085e26d18fb715
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.558992961Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d443444-012d-4615-98b4-9d9d3f29f79e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.559952068Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3812a1d3-61af-49be-aaa7-753a5805a82f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.561120134Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper" id=4166cc8b-a530-47f4-a2bd-241a990081c9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.561282031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.568476684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.569147119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.616645256Z" level=info msg="Created container 20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper" id=4166cc8b-a530-47f4-a2bd-241a990081c9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.617426454Z" level=info msg="Starting container: 20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259" id=e5e48480-5a30-445c-9b16-168c008cfc6d name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.619719906Z" level=info msg="Started container" PID=1781 containerID=20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper id=e5e48480-5a30-445c-9b16-168c008cfc6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=21902216ee3f08cee4dd6638eaa53487a32b13a6f37c322b2d5e16aee532d290
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.769956573Z" level=info msg="Removing container: d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6" id=6ebefe09-607d-40a9-a3de-59cf7b5307a5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:44 no-preload-990631 crio[570]: time="2025-12-16T05:05:44.86471471Z" level=info msg="Removed container d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64/dashboard-metrics-scraper" id=6ebefe09-607d-40a9-a3de-59cf7b5307a5 name=/runtime.v1.RuntimeService/RemoveContainer
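The CRI-O excerpt above is collected from the runtime's journal on the node. Assuming the kicbase image's systemd layout, the same stream is reachable through minikube ssh:

	minikube -p no-preload-990631 ssh -- sudo journalctl -u crio --no-pager -n 50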
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	20c8bffe74798       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   21902216ee3f0       dashboard-metrics-scraper-867fb5f87b-grx64   kubernetes-dashboard
	75d5b92ec924e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   93bcbe3bc6e03       storage-provisioner                          kube-system
	9fb7cb0f528cf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   5cdd1b748349e       kubernetes-dashboard-b84665fb8-rvxh6         kubernetes-dashboard
	f0d0dda87b464       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   5ce21e91f094c       busybox                                      default
	190d84165a587       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   022ad7ade8695       coredns-7d764666f9-7l2ds                     kube-system
	e3911104d1e18       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   316ed04058e0b       kindnet-p27s9                                kube-system
	9c81e85ff9b84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   93bcbe3bc6e03       storage-provisioner                          kube-system
	9a90b9a88d9e5       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           54 seconds ago      Running             kube-proxy                  0                   9bc69a9476caa       kube-proxy-2bjbr                             kube-system
	4b4c73e54f123       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           55 seconds ago      Running             kube-scheduler              0                   d63109d4147bd       kube-scheduler-no-preload-990631             kube-system
	c7bb8494fe95f       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           55 seconds ago      Running             kube-controller-manager     0                   73899f1b3a4cf       kube-controller-manager-no-preload-990631    kube-system
	3332f5674ea85       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           55 seconds ago      Running             kube-apiserver              0                   f91fb6993a524       kube-apiserver-no-preload-990631             kube-system
	f9cec7fd839f6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   01bac8788c8e1       etcd-no-preload-990631                       kube-system
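This table is CRI-level container status; it can be reproduced on the node, including Exited containers, with crictl (standard flags):

	minikube -p no-preload-990631 ssh -- sudo crictl ps -a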
	
	
	==> coredns [190d84165a587af2c099887138e64ae3db4eef4492add527746874d69791a022] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33811 - 57392 "HINFO IN 4512378612646338026.8813518048393044569. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016589329s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
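The trailing "Failed to watch" errors coincide with the apiserver restart earlier in this log; CoreDNS re-establishes its watches once the API is back. A quick health check uses the standard CoreDNS label:

	kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide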
	
	
	==> describe nodes <==
	Name:               no-preload-990631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-990631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=no-preload-990631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-990631
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:05:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:05:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:05:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:05:31 +0000   Tue, 16 Dec 2025 05:03:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:05:31 +0000   Tue, 16 Dec 2025 05:04:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-990631
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                fe9a87bf-3ac0-4eb7-ad47-1e9ad27489f8
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-7l2ds                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-990631                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-p27s9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-990631              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-990631     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-2bjbr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-990631              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-grx64    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-rvxh6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node no-preload-990631 event: Registered Node no-preload-990631 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node no-preload-990631 event: Registered Node no-preload-990631 in Controller
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [f9cec7fd839f6e0c5273a0f3e839f84aa3e28f3ba21d352e705a6f3c17d87bd9] <==
	{"level":"warn","ts":"2025-12-16T05:04:59.888757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.897920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.904543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.911369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.917921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.924678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.931324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.937599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.944066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.955139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.961399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.967527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.974716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.981192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.989416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:04:59.996402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.002778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.009847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.018535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.026549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.042678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.050202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.057483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.064238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:00.113429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60730","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:05:55 up 48 min,  0 user,  load average: 4.18, 2.98, 2.04
	Linux no-preload-990631 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3911104d1e183d624a37bd5240f1cb4daad3ed819947b773c46def5220cfd0d] <==
	I1216 05:05:01.160674       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:05:01.160949       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1216 05:05:01.161146       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:05:01.161167       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:05:01.161188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:05:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:05:01.362613       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:05:01.362645       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:05:01.362654       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:05:01.362776       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:05:01.662792       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:05:01.662820       1 metrics.go:72] Registering metrics
	I1216 05:05:01.662886       1 controller.go:711] "Syncing nftables rules"
	I1216 05:05:11.362648       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:11.362703       1 main.go:301] handling current node
	I1216 05:05:21.364051       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:21.364115       1 main.go:301] handling current node
	I1216 05:05:31.362891       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:31.362929       1 main.go:301] handling current node
	I1216 05:05:41.366358       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:41.366388       1 main.go:301] handling current node
	I1216 05:05:51.368126       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1216 05:05:51.368159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3332f5674ea8590f97c6776b7b666398f75a26d82a6f36588091b67934df1dd5] <==
	I1216 05:05:00.575839       1 aggregator.go:187] initial CRD sync complete...
	I1216 05:05:00.575869       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 05:05:00.575888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:05:00.575845       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:00.575893       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:05:00.576074       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:05:00.576309       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:05:00.585518       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:05:00.589420       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 05:05:00.589559       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:00.593803       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:00.593825       1 policy_source.go:248] refreshing policies
	I1216 05:05:00.603098       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:05:00.604793       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:05:00.820234       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:05:00.856333       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:05:00.882848       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:05:00.895978       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:05:00.959068       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.194.42"}
	I1216 05:05:00.969664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.168.215"}
	I1216 05:05:01.462536       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 05:05:04.109934       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:05:04.258443       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:05:04.310583       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c7bb8494fe95f58ef0c8c0f1c32dd0af7c7cc7b113eb924e8c2854b0b292a637] <==
	I1216 05:05:03.713831       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:03.713838       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712715       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712643       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712679       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712652       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712784       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712728       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712746       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712816       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712757       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712817       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712475       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712492       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712775       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.715487       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712775       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.715800       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.712785       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.719870       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.720783       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:03.812245       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:03.812267       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 05:05:03.812273       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 05:05:03.821090       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9a90b9a88d9e59d0d5baa394070a6cb12b35ea8f5a34889d1d7055ad38ce3468] <==
	I1216 05:05:00.960059       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:05:01.029729       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:01.130324       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:01.130355       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1216 05:05:01.130487       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:05:01.148945       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:05:01.149001       1 server_linux.go:136] "Using iptables Proxier"
	I1216 05:05:01.154000       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:05:01.154346       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 05:05:01.154365       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:01.155791       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:05:01.155819       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:05:01.156264       1 config.go:200] "Starting service config controller"
	I1216 05:05:01.156286       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:05:01.156328       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:05:01.156341       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:05:01.156825       1 config.go:309] "Starting node config controller"
	I1216 05:05:01.156906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:05:01.156941       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:05:01.257146       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:05:01.257163       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:05:01.257199       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4b4c73e54f123802f62a7af7b9af9fe41679d8d51fd01e105ca758edf669a4a5] <==
	I1216 05:04:59.344976       1 serving.go:386] Generated self-signed cert in-memory
	W1216 05:05:00.490238       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:05:00.490278       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:05:00.490291       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:05:00.490301       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:05:00.519562       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1216 05:05:00.519592       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:00.524209       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:00.524650       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:00.524587       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:05:00.524613       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:05:00.625811       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 05:05:15 no-preload-990631 kubelet[719]: E1216 05:05:15.483088     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-990631" containerName="kube-controller-manager"
	Dec 16 05:05:17 no-preload-990631 kubelet[719]: E1216 05:05:17.900413     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-990631" containerName="etcd"
	Dec 16 05:05:18 no-preload-990631 kubelet[719]: E1216 05:05:18.641838     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-990631" containerName="etcd"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: E1216 05:05:21.557285     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: I1216 05:05:21.557327     719 scope.go:122] "RemoveContainer" containerID="97a585270f13242f4a3d89e07445d78f365f0d47510dcad4229c6c38e6bf6223"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: I1216 05:05:21.651471     719 scope.go:122] "RemoveContainer" containerID="97a585270f13242f4a3d89e07445d78f365f0d47510dcad4229c6c38e6bf6223"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: E1216 05:05:21.651733     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: I1216 05:05:21.651766     719 scope.go:122] "RemoveContainer" containerID="d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6"
	Dec 16 05:05:21 no-preload-990631 kubelet[719]: E1216 05:05:21.651955     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-grx64_kubernetes-dashboard(3210284c-350f-4805-ac81-eb9cb0e85476)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" podUID="3210284c-350f-4805-ac81-eb9cb0e85476"
	Dec 16 05:05:30 no-preload-990631 kubelet[719]: E1216 05:05:30.747598     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:30 no-preload-990631 kubelet[719]: I1216 05:05:30.747636     719 scope.go:122] "RemoveContainer" containerID="d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6"
	Dec 16 05:05:30 no-preload-990631 kubelet[719]: E1216 05:05:30.747845     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-grx64_kubernetes-dashboard(3210284c-350f-4805-ac81-eb9cb0e85476)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" podUID="3210284c-350f-4805-ac81-eb9cb0e85476"
	Dec 16 05:05:31 no-preload-990631 kubelet[719]: I1216 05:05:31.677909     719 scope.go:122] "RemoveContainer" containerID="9c81e85ff9b8473632b2161e2625db403f46ed30b89331fc578fdc16bf337529"
	Dec 16 05:05:36 no-preload-990631 kubelet[719]: E1216 05:05:36.432249     719 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7l2ds" containerName="coredns"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: E1216 05:05:44.557683     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: I1216 05:05:44.558269     719 scope.go:122] "RemoveContainer" containerID="d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: I1216 05:05:44.715839     719 scope.go:122] "RemoveContainer" containerID="d6f60e7bb112856b9fe9e8a925ba7c2a2480ca771dbf1d27ef7621d9155a01b6"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: E1216 05:05:44.716102     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" containerName="dashboard-metrics-scraper"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: I1216 05:05:44.716139     719 scope.go:122] "RemoveContainer" containerID="20c8bffe747986bcea3cc9fd4275b5f931622998bb1dc9979d44281346604259"
	Dec 16 05:05:44 no-preload-990631 kubelet[719]: E1216 05:05:44.716331     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-grx64_kubernetes-dashboard(3210284c-350f-4805-ac81-eb9cb0e85476)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-grx64" podUID="3210284c-350f-4805-ac81-eb9cb0e85476"
	Dec 16 05:05:50 no-preload-990631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:05:50 no-preload-990631 kubelet[719]: I1216 05:05:50.323463     719 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 05:05:50 no-preload-990631 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:05:50 no-preload-990631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:05:50 no-preload-990631 systemd[1]: kubelet.service: Consumed 1.754s CPU time.
	
	
	==> kubernetes-dashboard [9fb7cb0f528cf2b4dfef928c1799678a2d9f7afec43110df5511ad9c2f36a8c5] <==
	2025/12/16 05:05:10 Starting overwatch
	2025/12/16 05:05:10 Using namespace: kubernetes-dashboard
	2025/12/16 05:05:10 Using in-cluster config to connect to apiserver
	2025/12/16 05:05:10 Using secret token for csrf signing
	2025/12/16 05:05:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 05:05:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 05:05:10 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/16 05:05:10 Generating JWE encryption key
	2025/12/16 05:05:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 05:05:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 05:05:11 Initializing JWE encryption key from synchronized object
	2025/12/16 05:05:11 Creating in-cluster Sidecar client
	2025/12/16 05:05:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:05:11 Serving insecurely on HTTP port: 9090
	2025/12/16 05:05:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [75d5b92ec924ecbdd7669c58da8eeaa55d6db46f7bfbcbba01a5c302a113eb7b] <==
	I1216 05:05:31.723635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:05:31.729858       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:05:31.729899       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:05:31.731864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:35.187464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:39.448654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:43.047943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:46.101917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:49.128222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:49.137957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:05:49.138397       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:05:49.141126       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-990631_e90612c3-9a5c-407b-8e79-d468426d8c3b!
	I1216 05:05:49.141748       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da628a4a-675d-4b94-b84f-777f27acf7a6", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-990631_e90612c3-9a5c-407b-8e79-d468426d8c3b became leader
	W1216 05:05:49.153366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:49.163610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:05:49.241503       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-990631_e90612c3-9a5c-407b-8e79-d468426d8c3b!
	W1216 05:05:51.167520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:51.172738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:53.176616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:53.210788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:55.214528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:55.222721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9c81e85ff9b8473632b2161e2625db403f46ed30b89331fc578fdc16bf337529] <==
	I1216 05:05:00.923744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 05:05:30.927056       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
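For context on the crashed storage-provisioner container in the logs above (9c81e85f...): it exits fatally when its startup request to the in-cluster apiserver service IP (10.96.0.1:443) times out. A minimal sketch of that kind of version probe, assuming client-go; minikube's actual provisioner code may differ:

// Hedged sketch of a client-go startup probe like the one that produced
// "error getting server version: ... dial tcp 10.96.0.1:443: i/o timeout".
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig reads the pod's mounted service-account token and the
	// KUBERNETES_SERVICE_HOST/PORT environment, which is why the failing
	// dial targets the 10.96.0.1 service IP rather than a node address.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("error building in-cluster config: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("error building clientset: %v", err)
	}
	// GET /version; an unreachable service IP surfaces here as an i/o timeout.
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		log.Fatalf("error getting server version: %v", err)
	}
	log.Printf("apiserver version: %s", v.GitVersion)
}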
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-990631 -n no-preload-990631
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-990631 -n no-preload-990631: exit status 2 (369.611117ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-990631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.11s)
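For reference, the --format={{.APIServer}} flag used by the status check above is a Go text/template evaluated against minikube's status struct, and a degraded component maps to the non-zero exit code the helper tolerates ("exit status 2 (may be ok)"). A hedged sketch with an illustrative struct; only the APIServer field name is taken from the command above, the rest is assumed:

// Hedged sketch: render a status field through text/template, the same
// mechanism behind `minikube status --format={{.APIServer}}`.
package main

import (
	"os"
	"text/template"
)

// Illustrative struct; not minikube's actual status type.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		os.Exit(1)
	}
	// minikube additionally maps degraded component states to non-zero
	// exit codes, which is why "Running" on stdout can coexist with exit 2.
}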

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.663374ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
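The MK_ADDON_ENABLE_PAUSED exit above comes from the "check paused" step, which shells out to "sudo runc list -f json" and treats a non-zero exit as failure; on this cri-o node the runc state directory /run/runc does not exist, hence the error. A hedged Go sketch of such a check (the struct fields follow runc's JSON list output; the helper itself is illustrative, not minikube's code):

// Hedged sketch of the pause check that fails above: shell out to
// `sudo runc list -f json` and look for paused containers.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// With cri-o, /run/runc may be absent, so runc exits non-zero --
		// exactly the "open /run/runc: no such file or directory" above.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}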
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-418580
helpers_test.go:244: (dbg) docker inspect newest-cni-418580:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1",
	        "Created": "2025-12-16T05:05:28.422423027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:05:28.467732729Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/hosts",
	        "LogPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1-json.log",
	        "Name": "/newest-cni-418580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-418580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-418580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1",
	                "LowerDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-418580",
	                "Source": "/var/lib/docker/volumes/newest-cni-418580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-418580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-418580",
	                "name.minikube.sigs.k8s.io": "newest-cni-418580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5ba95262ca9602a436f4a8613400554c8585175bab02e272987e3291d190f94d",
	            "SandboxKey": "/var/run/docker/netns/5ba95262ca96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-418580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "401a0a2302958aa43185772bb4bb58f2878244e830bb4ed218c958e2da9161d5",
	                    "EndpointID": "70b60b5956db216026e8c0ffb64a16ac1d0dca78e73a581ab96060e50487074c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "aa:75:bf:8e:31:3f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-418580",
	                        "7ec1bd166949"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
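As an aside, the NetworkSettings.Ports block in the inspect output above is how tooling discovers which host port is mapped to a container port (here 22/tcp -> 127.0.0.1:33097 for SSH). A small illustrative Go sketch that extracts it from docker inspect JSON; the field names mirror the dump above:

// Hedged sketch: read the host-side SSH binding out of `docker inspect`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields we need; names match docker's inspect JSON as dumped above.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "newest-cni-418580").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry // docker inspect returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
	}
}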
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418580 -n newest-cni-418580
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-418580 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-545075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p kubernetes-upgrade-132399                                                                                                                                                                                                                         │ kubernetes-upgrade-132399    │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ delete  │ -p disable-driver-mounts-148966                                                                                                                                                                                                                      │ disable-driver-mounts-148966 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-990631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p no-preload-990631 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p embed-certs-127599 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:05:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
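	The header above documents the klog-style line format used throughout this trace. For readers post-processing these logs, here is a minimal Go sketch of a parser for that format; the regexp and field names are illustrative assumptions, not part of minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := `I1216 05:05:28.619499  293223 out.go:360] Setting OutFile to fd 1 ...`
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}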
	I1216 05:05:28.619499  293223 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:05:28.619651  293223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:28.619666  293223 out.go:374] Setting ErrFile to fd 2...
	I1216 05:05:28.619672  293223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:28.619982  293223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:05:28.620585  293223 out.go:368] Setting JSON to false
	I1216 05:05:28.621902  293223 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2880,"bootTime":1765858649,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:05:28.621960  293223 start.go:143] virtualization: kvm guest
	I1216 05:05:28.624132  293223 out.go:179] * [default-k8s-diff-port-375252] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:05:28.625828  293223 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:05:28.625847  293223 notify.go:221] Checking for updates...
	I1216 05:05:28.628337  293223 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:05:28.629978  293223 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:28.631193  293223 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:05:28.632390  293223 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:05:28.633511  293223 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:05:28.635178  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:28.635713  293223 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:05:28.661922  293223 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:05:28.662051  293223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:28.718273  293223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-16 05:05:28.708319047 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:28.718425  293223 docker.go:319] overlay module found
	I1216 05:05:28.719951  293223 out.go:179] * Using the docker driver based on existing profile
	I1216 05:05:28.721068  293223 start.go:309] selected driver: docker
	I1216 05:05:28.721089  293223 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:28.721191  293223 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:05:28.721731  293223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:28.788963  293223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-16 05:05:28.778467948 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:28.789410  293223 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:28.789448  293223 cni.go:84] Creating CNI manager for ""
	I1216 05:05:28.789523  293223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:28.789569  293223 start.go:353] cluster config:
	{Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:28.791508  293223 out.go:179] * Starting "default-k8s-diff-port-375252" primary control-plane node in "default-k8s-diff-port-375252" cluster
	I1216 05:05:28.792954  293223 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:05:28.794079  293223 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	W1216 05:05:25.308136  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:27.308269  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:28.795105  293223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:28.795150  293223 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:05:28.795162  293223 cache.go:65] Caching tarball of preloaded images
	I1216 05:05:28.795205  293223 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:05:28.795272  293223 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:05:28.795286  293223 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:05:28.795400  293223 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:05:28.816986  293223 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:05:28.817006  293223 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:05:28.817050  293223 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:05:28.817103  293223 start.go:360] acquireMachinesLock for default-k8s-diff-port-375252: {Name:mkb8cd554afbe4d541db5e350d6207558dad5cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:05:28.817174  293223 start.go:364] duration metric: took 39.023µs to acquireMachinesLock for "default-k8s-diff-port-375252"
	I1216 05:05:28.817192  293223 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:05:28.817201  293223 fix.go:54] fixHost starting: 
	I1216 05:05:28.817485  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:28.837506  293223 fix.go:112] recreateIfNeeded on default-k8s-diff-port-375252: state=Stopped err=<nil>
	W1216 05:05:28.837533  293223 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:05:25.049152  292503 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:05:25.049357  292503 start.go:159] libmachine.API.Create for "newest-cni-418580" (driver="docker")
	I1216 05:05:25.049412  292503 client.go:173] LocalClient.Create starting
	I1216 05:05:25.049478  292503 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:05:25.049507  292503 main.go:143] libmachine: Decoding PEM data...
	I1216 05:05:25.049530  292503 main.go:143] libmachine: Parsing certificate...
	I1216 05:05:25.049578  292503 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:05:25.049595  292503 main.go:143] libmachine: Decoding PEM data...
	I1216 05:05:25.049605  292503 main.go:143] libmachine: Parsing certificate...
	I1216 05:05:25.049898  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:05:25.066218  292503 cli_runner.go:211] docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:05:25.066280  292503 network_create.go:284] running [docker network inspect newest-cni-418580] to gather additional debugging logs...
	I1216 05:05:25.066296  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580
	W1216 05:05:25.082484  292503 cli_runner.go:211] docker network inspect newest-cni-418580 returned with exit code 1
	I1216 05:05:25.082512  292503 network_create.go:287] error running [docker network inspect newest-cni-418580]: docker network inspect newest-cni-418580: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-418580 not found
	I1216 05:05:25.082540  292503 network_create.go:289] output of [docker network inspect newest-cni-418580]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-418580 not found
	
	** /stderr **
	I1216 05:05:25.082645  292503 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:25.099947  292503 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:05:25.100678  292503 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:05:25.101416  292503 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:05:25.102201  292503 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b4cd001204df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:4c:f5:dc:0a:90} reservation:<nil>}
	I1216 05:05:25.103269  292503 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f240c0}
	I1216 05:05:25.103298  292503 network_create.go:124] attempt to create docker network newest-cni-418580 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 05:05:25.103367  292503 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-418580 newest-cni-418580
	I1216 05:05:25.151692  292503 network_create.go:108] docker network newest-cni-418580 192.168.85.0/24 created
	I1216 05:05:25.151725  292503 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-418580" container
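	The network.go lines above show the subnet scan: candidate 192.168.x.0/24 ranges are walked in steps of 9 (49, 58, 67, 76, ...) and the first one no existing bridge occupies is used. A minimal Go sketch of that scan, assuming a hypothetical isTaken helper based on local interface addresses rather than minikube's actual reservation logic:

	package main

	import (
		"fmt"
		"net"
	)

	// isTaken reports whether any local interface address falls inside cidr.
	func isTaken(cidr *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative: treat errors as "taken"
		}
		for _, a := range addrs {
			ip, _, err := net.ParseCIDR(a.String())
			if err == nil && cidr.Contains(ip) {
				return true
			}
		}
		return false
	}

	func main() {
		// Candidate ladder matching the log: 192.168.49.0/24, .58, .67, .76, .85, ...
		for third := 49; third <= 247; third += 9 {
			_, cidr, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if isTaken(cidr) {
				fmt.Println("skipping subnet that is taken:", cidr)
				continue
			}
			fmt.Println("using free private subnet:", cidr)
			return
		}
	}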
	I1216 05:05:25.151806  292503 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:05:25.168787  292503 cli_runner.go:164] Run: docker volume create newest-cni-418580 --label name.minikube.sigs.k8s.io=newest-cni-418580 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:05:25.186120  292503 oci.go:103] Successfully created a docker volume newest-cni-418580
	I1216 05:05:25.186217  292503 cli_runner.go:164] Run: docker run --rm --name newest-cni-418580-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-418580 --entrypoint /usr/bin/test -v newest-cni-418580:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:05:25.583110  292503 oci.go:107] Successfully prepared a docker volume newest-cni-418580
	I1216 05:05:25.583174  292503 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:05:25.583183  292503 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:05:25.583245  292503 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-418580:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 05:05:28.338637  292503 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-418580:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (2.755349769s)
	I1216 05:05:28.338666  292503 kic.go:203] duration metric: took 2.755479724s to extract preloaded images to volume ...
	W1216 05:05:28.338763  292503 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 05:05:28.338816  292503 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 05:05:28.338866  292503 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 05:05:28.403669  292503 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-418580 --name newest-cni-418580 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-418580 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-418580 --network newest-cni-418580 --ip 192.168.85.2 --volume newest-cni-418580:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 05:05:28.722054  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Running}}
	I1216 05:05:28.742885  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.766670  292503 cli_runner.go:164] Run: docker exec newest-cni-418580 stat /var/lib/dpkg/alternatives/iptables
	I1216 05:05:28.817325  292503 oci.go:144] the created container "newest-cni-418580" has a running status.
	I1216 05:05:28.817353  292503 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa...
	I1216 05:05:28.858625  292503 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 05:05:28.890286  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.914645  292503 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 05:05:28.914674  292503 kic_runner.go:114] Args: [docker exec --privileged newest-cni-418580 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 05:05:28.970557  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:28.995784  292503 machine.go:94] provisionDockerMachine start ...
	I1216 05:05:28.995879  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:29.018594  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:29.018845  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:29.018853  292503 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:05:29.019705  292503 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49150->127.0.0.1:33097: read: connection reset by peer
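	The "connection reset by peer" above is expected on a freshly created container: sshd inside it is not yet accepting connections, and the provisioner simply retries (the same dial succeeds at 05:05:32 further down). A minimal retry sketch using the golang.org/x/crypto/ssh module; the key path is a placeholder and the loop is an illustrative assumption, not minikube's dial code:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry retries ssh.Dial until sshd starts accepting connections.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			time.Sleep(time.Second) // sshd is usually up within a few seconds
		}
		return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		key, err := os.ReadFile("/path/to/machines/<profile>/id_rsa") // placeholder path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test machine
			Timeout:         5 * time.Second,
		}
		client, err := dialWithRetry("127.0.0.1:33097", cfg, 10)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}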
	W1216 05:05:29.106433  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:31.606893  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:28.839266  293223 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-375252" ...
	I1216 05:05:28.839325  293223 cli_runner.go:164] Run: docker start default-k8s-diff-port-375252
	I1216 05:05:29.113119  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:29.132662  293223 kic.go:430] container "default-k8s-diff-port-375252" state is running.
	I1216 05:05:29.133094  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:29.152639  293223 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/config.json ...
	I1216 05:05:29.152879  293223 machine.go:94] provisionDockerMachine start ...
	I1216 05:05:29.152970  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:29.171277  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:29.171536  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:29.171549  293223 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:05:29.172205  293223 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51300->127.0.0.1:33103: read: connection reset by peer
	I1216 05:05:32.297657  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:05:32.297684  293223 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-375252"
	I1216 05:05:32.297751  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.318798  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.319051  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.319069  293223 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-375252 && echo "default-k8s-diff-port-375252" | sudo tee /etc/hostname
	I1216 05:05:32.456847  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-375252
	
	I1216 05:05:32.456922  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.476706  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.477045  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.477072  293223 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-375252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-375252/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-375252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:32.606389  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:32.606417  293223 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:32.606439  293223 ubuntu.go:190] setting up certificates
	I1216 05:05:32.606452  293223 provision.go:84] configureAuth start
	I1216 05:05:32.606505  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:32.627629  293223 provision.go:143] copyHostCerts
	I1216 05:05:32.627685  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:32.627701  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:32.627749  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:32.627836  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:32.627845  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:32.627865  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:32.627913  293223 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:32.627921  293223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:32.627939  293223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:32.627997  293223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-375252 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-375252 localhost minikube]
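	The provision.go line above generates a server certificate signed by the profile CA, with the listed IP and DNS SANs. A minimal crypto/x509 sketch of that step, assuming an RSA PKCS#1 CA key and placeholder file names; this is standard-library usage, not minikube's provision code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	// mustReadPEM loads the first PEM block from a file (placeholder paths below).
	func mustReadPEM(path string) []byte {
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatalf("no PEM block in %s", path)
		}
		return block.Bytes
	}

	func main() {
		caCert, err := x509.ParseCertificate(mustReadPEM("ca.pem"))
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(mustReadPEM("ca-key.pem")) // assumes RSA PKCS#1 key
		if err != nil {
			log.Fatal(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// SANs mirror the log line: two IPs plus three hostnames.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-375252"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:     []string{"default-k8s-diff-port-375252", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}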
	I1216 05:05:32.730496  293223 provision.go:177] copyRemoteCerts
	I1216 05:05:32.730565  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:32.730600  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.751146  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:32.843812  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:32.861358  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 05:05:32.879619  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:05:32.898030  293223 provision.go:87] duration metric: took 291.544808ms to configureAuth
	I1216 05:05:32.898063  293223 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:32.898285  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:32.898408  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:32.918526  293223 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.918833  293223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1216 05:05:32.918878  293223 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:33.236179  293223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:33.236212  293223 machine.go:97] duration metric: took 4.083315908s to provisionDockerMachine
	I1216 05:05:33.236229  293223 start.go:293] postStartSetup for "default-k8s-diff-port-375252" (driver="docker")
	I1216 05:05:33.236246  293223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:33.236317  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:33.236391  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.255124  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.349976  293223 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:33.353576  293223 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:33.353609  293223 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:33.353622  293223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:33.353677  293223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:33.353784  293223 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:33.353908  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:33.361552  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:33.379312  293223 start.go:296] duration metric: took 143.06602ms for postStartSetup
	I1216 05:05:33.379398  293223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:33.379447  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.400517  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.491075  293223 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:33.495538  293223 fix.go:56] duration metric: took 4.678331684s for fixHost
	I1216 05:05:33.495565  293223 start.go:83] releasing machines lock for "default-k8s-diff-port-375252", held for 4.678379719s
	I1216 05:05:33.495640  293223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-375252
	I1216 05:05:33.516725  293223 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:33.516805  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.516816  293223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:33.516895  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:33.536965  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:33.537322  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	W1216 05:05:29.308470  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:31.807551  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:32.147295  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-418580
	
	I1216 05:05:32.147323  292503 ubuntu.go:182] provisioning hostname "newest-cni-418580"
	I1216 05:05:32.147387  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.165073  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.165297  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.165310  292503 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-418580 && echo "newest-cni-418580" | sudo tee /etc/hostname
	I1216 05:05:32.302827  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-418580
	
	I1216 05:05:32.302918  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.322729  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.323008  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.323056  292503 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-418580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-418580/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-418580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:05:32.450471  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:05:32.450502  292503 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:05:32.450536  292503 ubuntu.go:190] setting up certificates
	I1216 05:05:32.450556  292503 provision.go:84] configureAuth start
	I1216 05:05:32.450612  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:32.471395  292503 provision.go:143] copyHostCerts
	I1216 05:05:32.471462  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:05:32.471473  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:05:32.471555  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:05:32.471711  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:05:32.471728  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:05:32.471774  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:05:32.471881  292503 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:05:32.471892  292503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:05:32.471939  292503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:05:32.472109  292503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.newest-cni-418580 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-418580]
	I1216 05:05:32.499918  292503 provision.go:177] copyRemoteCerts
	I1216 05:05:32.499967  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:05:32.499999  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.518449  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:32.614228  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:05:32.635678  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:05:32.653888  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:05:32.672293  292503 provision.go:87] duration metric: took 221.714395ms to configureAuth
	I1216 05:05:32.672321  292503 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:05:32.672501  292503 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:32.672618  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.693226  292503 main.go:143] libmachine: Using SSH client type: native
	I1216 05:05:32.693545  292503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I1216 05:05:32.693567  292503 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:05:32.968063  292503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:05:32.968093  292503 machine.go:97] duration metric: took 3.972285388s to provisionDockerMachine
	I1216 05:05:32.968105  292503 client.go:176] duration metric: took 7.918682906s to LocalClient.Create
	I1216 05:05:32.968125  292503 start.go:167] duration metric: took 7.918767525s to libmachine.API.Create "newest-cni-418580"
	I1216 05:05:32.968137  292503 start.go:293] postStartSetup for "newest-cni-418580" (driver="docker")
	I1216 05:05:32.968155  292503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:05:32.968222  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:05:32.968275  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:32.987417  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.081927  292503 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:05:33.086603  292503 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:05:33.086638  292503 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:05:33.086652  292503 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:05:33.086706  292503 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:05:33.086801  292503 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:05:33.086920  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:05:33.095668  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:33.117594  292503 start.go:296] duration metric: took 149.443086ms for postStartSetup
	I1216 05:05:33.117910  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:33.136597  292503 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/config.json ...
	I1216 05:05:33.136935  292503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:05:33.136989  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.157818  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.249294  292503 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:05:33.254260  292503 start.go:128] duration metric: took 8.207842149s to createHost
	I1216 05:05:33.254284  292503 start.go:83] releasing machines lock for "newest-cni-418580", held for 8.207977773s
	I1216 05:05:33.254363  292503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-418580
	I1216 05:05:33.272954  292503 ssh_runner.go:195] Run: cat /version.json
	I1216 05:05:33.273028  292503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:05:33.273042  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.273100  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:33.291480  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.293050  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:33.382769  292503 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:33.442424  292503 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:33.477702  292503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:33.482676  292503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:33.482740  292503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:33.509906  292503 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:05:33.509941  292503 start.go:496] detecting cgroup driver to use...
	I1216 05:05:33.509974  292503 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:33.510045  292503 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:33.529423  292503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:33.545150  292503 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:33.545215  292503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:33.563032  292503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:33.581565  292503 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:33.666314  292503 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:33.758723  292503 docker.go:234] disabling docker service ...
	I1216 05:05:33.758784  292503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:33.777100  292503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:33.790420  292503 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:33.893910  292503 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:33.985401  292503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:33.997878  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:34.012256  292503 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:34.012314  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.022649  292503 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:34.022700  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.031562  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.042658  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.053710  292503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:34.062165  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.070697  292503 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.083923  292503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
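(Editor's note: the sed passes above are idempotent in-place edits; the delete-then-reinsert dance for conmon_cgroup and default_sysctls guarantees exactly one copy of each key across repeated starts. Their approximate net effect on /etc/crio/crio.conf.d/02-crio.conf, reconstructed here rather than captured from the run, is:)

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]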
	I1216 05:05:34.092579  292503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:34.099986  292503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:34.107536  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.189571  292503 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:05:34.334382  292503 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:34.334464  292503 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:34.338658  292503 start.go:564] Will wait 60s for crictl version
	I1216 05:05:34.338717  292503 ssh_runner.go:195] Run: which crictl
	I1216 05:05:34.342328  292503 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:34.366604  292503 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
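(Editor's note: both 60-second waits above poll for readiness rather than sleeping a fixed interval; minikube's actual retry logic lives in Go, but a shell sketch of the pattern, loop bounds assumed, is:)

	for i in $(seq 1 60); do
	  [ -S /var/run/crio/crio.sock ] && break   # the stat above succeeds once CRI-O is up
	  sleep 1
	done
	sudo /usr/local/bin/crictl version          # prints the Runtime* fields shown above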
	I1216 05:05:34.366695  292503 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.396642  292503 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.433283  292503 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 05:05:34.435360  292503 cli_runner.go:164] Run: docker network inspect newest-cni-418580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:34.453408  292503 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:05:34.457897  292503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
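(Editor's note: the /etc/hosts rewrite above is the standard sudo-safe idiom: a plain shell redirection would run with the caller's privileges, so the new file is assembled in /tmp and copied into place. Unrolled from the logged one-liner:)

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
	  printf '192.168.85.1\thost.minikube.internal\n'   # re-add the host gateway IP
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts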
	I1216 05:05:34.470954  292503 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 05:05:33.681706  293223 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:33.688766  293223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:05:33.730139  293223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:05:33.735582  293223 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:05:33.735658  293223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:05:33.743577  293223 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:05:33.743600  293223 start.go:496] detecting cgroup driver to use...
	I1216 05:05:33.743629  293223 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:05:33.743661  293223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:05:33.757282  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:05:33.769587  293223 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:05:33.769641  293223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:05:33.785195  293223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:05:33.797271  293223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:05:33.890845  293223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:05:33.981546  293223 docker.go:234] disabling docker service ...
	I1216 05:05:33.981625  293223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:05:33.997128  293223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:05:34.009761  293223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:05:34.091773  293223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:05:34.181978  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:05:34.195138  293223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:05:34.209731  293223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:05:34.209779  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.218732  293223 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:05:34.218793  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.227814  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.236762  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.246292  293223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:05:34.254357  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.262928  293223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.271401  293223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:05:34.280072  293223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:05:34.288086  293223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:05:34.295663  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.380378  293223 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:05:34.524358  293223 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:05:34.524440  293223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:05:34.528707  293223 start.go:564] Will wait 60s for crictl version
	I1216 05:05:34.528766  293223 ssh_runner.go:195] Run: which crictl
	I1216 05:05:34.532850  293223 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:05:34.559122  293223 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:05:34.559232  293223 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.588698  293223 ssh_runner.go:195] Run: crio --version
	I1216 05:05:34.622232  293223 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:05:34.472170  292503 kubeadm.go:884] updating cluster {Name:newest-cni-418580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:05:34.472345  292503 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:05:34.472410  292503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.506214  292503 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.506240  292503 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:05:34.506293  292503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.533288  292503 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.533310  292503 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:05:34.533332  292503 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 05:05:34.533435  292503 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-418580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
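(Editor's note: the empty ExecStart= line above is the usual systemd drop-in idiom: it clears the packaged command so the override that follows becomes the only one, taking effect after the daemon-reload a few lines below. Reassembled as the file it becomes, with the destination taken from the scp below:)

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-418580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

	[Install]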
	I1216 05:05:34.533515  292503 ssh_runner.go:195] Run: crio config
	I1216 05:05:34.582963  292503 cni.go:84] Creating CNI manager for ""
	I1216 05:05:34.582990  292503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:34.583009  292503 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 05:05:34.583050  292503 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-418580 NodeName:newest-cni-418580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:05:34.583227  292503 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-418580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
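(Editor's note: on a fresh node like this one, the YAML above is written to /var/tmp/minikube/kubeadm.yaml.new by the scp below and promoted to kubeadm.yaml before init. A hedged sketch of the resulting invocation; the exact flag set is minikube-internal, but a later log line confirms SystemVerification is skipped on the docker driver:)

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification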
	
	I1216 05:05:34.583301  292503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:05:34.592247  292503 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:05:34.592309  292503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:05:34.600451  292503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 05:05:34.615740  292503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:05:34.631629  292503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 05:05:34.645966  292503 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:05:34.649983  292503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.660983  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.754997  292503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:34.777628  292503 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580 for IP: 192.168.85.2
	I1216 05:05:34.777642  292503 certs.go:195] generating shared ca certs ...
	I1216 05:05:34.777658  292503 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.777857  292503 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:05:34.777914  292503 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:05:34.777931  292503 certs.go:257] generating profile certs ...
	I1216 05:05:34.778003  292503 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key
	I1216 05:05:34.778143  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt with IP's: []
	I1216 05:05:34.623929  293223 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-375252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:05:34.642569  293223 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 05:05:34.646797  293223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.657490  293223 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:05:34.657635  293223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:34.657700  293223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.694359  293223 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.694383  293223 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:05:34.694449  293223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:05:34.722765  293223 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:05:34.722791  293223 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:05:34.722798  293223 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1216 05:05:34.722885  293223 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-375252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:05:34.722943  293223 ssh_runner.go:195] Run: crio config
	I1216 05:05:34.768295  293223 cni.go:84] Creating CNI manager for ""
	I1216 05:05:34.768319  293223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:34.768330  293223 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:05:34.768351  293223 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-375252 NodeName:default-k8s-diff-port-375252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:05:34.768461  293223 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-375252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:05:34.768522  293223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:05:34.776980  293223 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:05:34.777057  293223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:05:34.784977  293223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1216 05:05:34.799152  293223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:05:34.814226  293223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1216 05:05:34.829713  293223 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:05:34.833416  293223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:05:34.843862  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:34.923578  293223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:34.953581  293223 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252 for IP: 192.168.76.2
	I1216 05:05:34.953601  293223 certs.go:195] generating shared ca certs ...
	I1216 05:05:34.953616  293223 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.953767  293223 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:05:34.953838  293223 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:05:34.953859  293223 certs.go:257] generating profile certs ...
	I1216 05:05:34.953970  293223 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/client.key
	I1216 05:05:34.954080  293223 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key.7bfc343f
	I1216 05:05:34.954137  293223 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key
	I1216 05:05:34.954290  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:05:34.954338  293223 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:05:34.954352  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:05:34.954447  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:05:34.954496  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:05:34.954531  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:05:34.954590  293223 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:34.955229  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:05:34.975322  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:05:34.994253  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:05:35.014878  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:05:35.040118  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 05:05:35.059441  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:05:35.076865  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:05:35.094221  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/default-k8s-diff-port-375252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 05:05:35.112726  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:05:35.130358  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:05:35.148722  293223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:05:35.166552  293223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:05:35.179253  293223 ssh_runner.go:195] Run: openssl version
	I1216 05:05:35.185464  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.194548  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:05:35.202787  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.207145  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.207204  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.243264  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:05:35.250997  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.258211  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:05:35.265357  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.269279  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.269321  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.304617  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:05:35.312873  293223 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.320543  293223 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:05:35.328266  293223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.331987  293223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.332071  293223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.369465  293223 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
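(Editor's note: the hash-and-symlink sequence repeated above is how OpenSSL's trust directory works: certificates under /etc/ssl/certs are located by subject-name hash via symlinks named <hash>.0, the .0 suffix disambiguating colliding hashes. One round, paths taken from the log:)

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")       # b5213941 for minikubeCA here
	sudo test -L "/etc/ssl/certs/${hash}.0" \
	  || sudo ln -fs "/etc/ssl/certs/minikubeCA.pem" "/etc/ssl/certs/${hash}.0"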
	I1216 05:05:35.377085  293223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:05:35.380947  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:05:35.422121  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:05:35.462335  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:05:35.508430  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:05:35.559688  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:05:35.615416  293223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
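(Editor's note: each -checkend 86400 probe above exits 0 only if the certificate stays valid for at least another 86400 seconds, i.e. 24 hours; this is how the restart path decides whether existing control-plane certs can be reused or must be regenerated. Looped form over the non-etcd certs named in the log:)

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" || echo "${c} expires within 24h"
	done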
	I1216 05:05:35.663433  293223 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-375252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-375252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:35.663546  293223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:05:35.663626  293223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:05:35.697125  293223 cri.go:89] found id: "3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45"
	I1216 05:05:35.697150  293223 cri.go:89] found id: "fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3"
	I1216 05:05:35.697156  293223 cri.go:89] found id: "bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff"
	I1216 05:05:35.697169  293223 cri.go:89] found id: "5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01"
	I1216 05:05:35.697174  293223 cri.go:89] found id: ""
	I1216 05:05:35.697297  293223 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:05:35.710419  293223 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:35Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:35.710548  293223 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:05:35.718597  293223 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:05:35.718613  293223 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:05:35.718655  293223 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:05:35.727178  293223 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:05:35.728054  293223 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-375252" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:35.728609  293223 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5066/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-375252" cluster setting kubeconfig missing "default-k8s-diff-port-375252" context setting]
	I1216 05:05:35.729803  293223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.732044  293223 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:05:35.741439  293223 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 05:05:35.741468  293223 kubeadm.go:602] duration metric: took 22.850213ms to restartPrimaryControlPlane
	I1216 05:05:35.741479  293223 kubeadm.go:403] duration metric: took 78.055706ms to StartCluster
	I1216 05:05:35.741495  293223 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.741561  293223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:35.743663  293223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.743920  293223 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:35.744192  293223 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:35.744255  293223 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:05:35.744337  293223 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744353  293223 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.744361  293223 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:05:35.744385  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.744837  293223 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744857  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.744859  293223 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-375252"
	I1216 05:05:35.744867  293223 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.744878  293223 addons.go:248] addon dashboard should already be in state true
	I1216 05:05:35.744882  293223 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-375252"
	I1216 05:05:35.744905  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.745234  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.745413  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.746307  293223 out.go:179] * Verifying Kubernetes components...
	I1216 05:05:35.747617  293223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:35.779083  293223 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 05:05:35.779717  293223 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-375252"
	W1216 05:05:35.779773  293223 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:05:35.779803  293223 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:05:35.780704  293223 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:05:35.782924  293223 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 05:05:35.782935  293223 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:05:34.912976  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt ...
	I1216 05:05:34.913002  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.crt: {Name:mkf026a43bf34828560cdd861dc967a4da3a0a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.913190  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key ...
	I1216 05:05:34.913205  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/client.key: {Name:mk9f0f25e1ac61a51fd17fa02f026219d2b811d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.913291  292503 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e
	I1216 05:05:34.913325  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1216 05:05:34.970132  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e ...
	I1216 05:05:34.970158  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e: {Name:mk2d44e3904e6a658ff0ee37692a05dfb5d718cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.970338  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e ...
	I1216 05:05:34.970360  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e: {Name:mkca5d8ab902ee32a3562f650b1165f4e10184d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:34.970489  292503 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt.66d5fe9e -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt
	I1216 05:05:34.970612  292503 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key.66d5fe9e -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key
	I1216 05:05:34.970698  292503 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key
	I1216 05:05:34.970726  292503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt with IP's: []
	I1216 05:05:35.230717  292503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt ...
	I1216 05:05:35.230762  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt: {Name:mk2bf2e2cf7c916f101948379e3dbb6fdb61a8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.230936  292503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key ...
	I1216 05:05:35.230951  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key: {Name:mkf0aafa828bb13a430c43eed8a9d89f985fe446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:35.231223  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:05:35.231284  292503 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:05:35.231307  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:05:35.231346  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:05:35.231400  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:05:35.231441  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:05:35.231498  292503 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:05:35.232006  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:05:35.250349  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:05:35.267912  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:05:35.285259  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:05:35.302305  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:05:35.320455  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:05:35.338139  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:05:35.355217  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:05:35.372674  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:05:35.396550  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:05:35.414853  292503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:05:35.433117  292503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:05:35.445803  292503 ssh_runner.go:195] Run: openssl version
	I1216 05:05:35.452761  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.462981  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:05:35.472831  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.478677  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.478741  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:05:35.527788  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:05:35.538626  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:05:35.549759  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.561348  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:05:35.569778  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.574227  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.574289  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:05:35.628396  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:05:35.639703  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:05:35.650632  292503 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.659686  292503 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:05:35.669120  292503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.673306  292503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.673366  292503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:05:35.722247  292503 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:05:35.731511  292503 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
	I1216 05:05:35.740868  292503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:05:35.745336  292503 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:05:35.745401  292503 kubeadm.go:401] StartCluster: {Name:newest-cni-418580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:35.745487  292503 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:05:35.746058  292503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:05:35.801678  292503 cri.go:89] found id: ""
	I1216 05:05:35.801751  292503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:05:35.816616  292503 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:05:35.828849  292503 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:05:35.828906  292503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:05:35.849259  292503 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:05:35.849285  292503 kubeadm.go:158] found existing configuration files:
	
	I1216 05:05:35.849332  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:05:35.861581  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:05:35.861651  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:05:35.872359  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:05:35.882005  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:05:35.882094  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:05:35.892470  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:05:35.903276  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:05:35.903332  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:05:35.910839  292503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:05:35.918764  292503 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:05:35.918817  292503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
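The four grep-then-rm pairs above enforce a single invariant before `kubeadm init`: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed as stale. A roughly equivalent shell loop (endpoint and file list taken from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$conf" || sudo rm -f "/etc/kubernetes/$conf"
    done

Here all four greps exit with status 2 because the files do not exist yet (first start), so the rm calls are no-ops.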
	I1216 05:05:35.927327  292503 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:05:35.983497  292503 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:05:35.983639  292503 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:05:36.078326  292503 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:05:36.078423  292503 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:05:36.078478  292503 kubeadm.go:319] OS: Linux
	I1216 05:05:36.078538  292503 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:05:36.078632  292503 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:05:36.078725  292503 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:05:36.078801  292503 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:05:36.078868  292503 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:05:36.078935  292503 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:05:36.079014  292503 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:05:36.079098  292503 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:05:36.155571  292503 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:05:36.155718  292503 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:05:36.155845  292503 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
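As the preflight hint above notes, the image pull can be performed ahead of time against the same config file that `kubeadm init` consumes; a hand-run equivalent on this node would be:

    # pre-pull the control-plane images kubeadm init will need
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config images pull \
      --config /var/tmp/minikube/kubeadm.yaml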
	I1216 05:05:36.166440  292503 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1216 05:05:34.106689  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	W1216 05:05:36.109040  284202 pod_ready.go:104] pod "coredns-7d764666f9-7l2ds" is not "Ready", error: <nil>
	I1216 05:05:36.607281  284202 pod_ready.go:94] pod "coredns-7d764666f9-7l2ds" is "Ready"
	I1216 05:05:36.607311  284202 pod_ready.go:86] duration metric: took 34.506370037s for pod "coredns-7d764666f9-7l2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.609962  284202 pod_ready.go:83] waiting for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.613794  284202 pod_ready.go:94] pod "etcd-no-preload-990631" is "Ready"
	I1216 05:05:36.613816  284202 pod_ready.go:86] duration metric: took 3.832267ms for pod "etcd-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.615814  284202 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.619531  284202 pod_ready.go:94] pod "kube-apiserver-no-preload-990631" is "Ready"
	I1216 05:05:36.619551  284202 pod_ready.go:86] duration metric: took 3.7182ms for pod "kube-apiserver-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.621267  284202 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:36.805520  284202 pod_ready.go:94] pod "kube-controller-manager-no-preload-990631" is "Ready"
	I1216 05:05:36.805552  284202 pod_ready.go:86] duration metric: took 184.26649ms for pod "kube-controller-manager-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.004943  284202 pod_ready.go:83] waiting for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.405104  284202 pod_ready.go:94] pod "kube-proxy-2bjbr" is "Ready"
	I1216 05:05:37.405148  284202 pod_ready.go:86] duration metric: took 400.175347ms for pod "kube-proxy-2bjbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:37.605530  284202 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:38.005242  284202 pod_ready.go:94] pod "kube-scheduler-no-preload-990631" is "Ready"
	I1216 05:05:38.005269  284202 pod_ready.go:86] duration metric: took 399.71845ms for pod "kube-scheduler-no-preload-990631" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:38.005280  284202 pod_ready.go:40] duration metric: took 35.907405331s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:38.078833  284202 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:05:38.083862  284202 out.go:179] * Done! kubectl is now configured to use "no-preload-990631" cluster and "default" namespace by default
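The pod_ready loop that just completed is minikube's internal readiness poll over the labelled control-plane pods. The same check can be approximated by hand with kubectl wait (context name from the log; labels as listed in the summary line above):

    # wait for the DNS and apiserver pods in kube-system to report Ready
    kubectl --context no-preload-990631 -n kube-system wait pod \
      --for=condition=Ready -l k8s-app=kube-dns --timeout=4m
    kubectl --context no-preload-990631 -n kube-system wait pod \
      --for=condition=Ready -l component=kube-apiserver --timeout=4m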
	I1216 05:05:36.168529  292503 out.go:252]   - Generating certificates and keys ...
	I1216 05:05:36.168629  292503 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:05:36.168732  292503 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:05:36.271180  292503 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:05:36.355639  292503 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 05:05:36.514877  292503 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 05:05:36.614071  292503 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 05:05:36.827128  292503 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 05:05:36.827351  292503 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-418580] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:05:36.974476  292503 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 05:05:36.974620  292503 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-418580] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:05:37.026221  292503 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 05:05:37.227397  292503 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 05:05:37.493691  292503 kubeadm.go:319] [certs] Generating "sa" key and public key
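The [certs] phase above generates the full kubeadm PKI under the certificateDir shown earlier (/var/lib/minikube/certs), signing the etcd serving and peer certs for the node name plus the loopback addresses. With a recent OpenSSL the SANs can be verified after the fact (the etcd/ subpath follows kubeadm's standard layout and is an assumption here):

    # list the DNS names and IPs the etcd server certificate is valid for
    sudo openssl x509 -noout -ext subjectAltName \
      -in /var/lib/minikube/certs/etcd/server.crt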
	I1216 05:05:37.493853  292503 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:05:37.662278  292503 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:05:37.748294  292503 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:05:37.901537  292503 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:05:38.083466  292503 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:05:38.279485  292503 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:05:38.280369  292503 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:05:38.285287  292503 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:05:35.784693  293223 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:35.784717  293223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:05:35.784735  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 05:05:35.784752  293223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 05:05:35.784789  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.784814  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.819593  293223 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:35.819612  293223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:05:35.819664  293223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:05:35.819933  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.823853  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.844792  293223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:05:35.921834  293223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:35.938442  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:35.941483  293223 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-375252" to be "Ready" ...
	I1216 05:05:35.950146  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 05:05:35.950165  293223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 05:05:35.959892  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:35.969036  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 05:05:35.969063  293223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 05:05:35.989078  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 05:05:35.989103  293223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 05:05:36.011127  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 05:05:36.011150  293223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 05:05:36.031446  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 05:05:36.031478  293223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 05:05:36.045414  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 05:05:36.045444  293223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 05:05:36.064565  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 05:05:36.064589  293223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 05:05:36.081324  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 05:05:36.081348  293223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 05:05:36.095008  293223 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 05:05:36.095044  293223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 05:05:36.112771  293223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
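The apply above enumerates all ten dashboard manifests as separate -f flags; since they were all scp'd into /etc/kubernetes/addons, pointing kubectl at the directory is a roughly equivalent shorthand (kubectl apply re-applies the already-installed storage manifests idempotently, so the extra files in that directory are harmless):

    # apply every manifest staged in the addons directory
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/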
	I1216 05:05:38.170264  293223 node_ready.go:49] node "default-k8s-diff-port-375252" is "Ready"
	I1216 05:05:38.170298  293223 node_ready.go:38] duration metric: took 2.228781477s for node "default-k8s-diff-port-375252" to be "Ready" ...
	I1216 05:05:38.170312  293223 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:38.170360  293223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:38.776267  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.837791808s)
	I1216 05:05:38.776306  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.816384453s)
	I1216 05:05:38.776428  293223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.663623652s)
	I1216 05:05:38.776458  293223 api_server.go:72] duration metric: took 3.032511707s to wait for apiserver process to appear ...
	I1216 05:05:38.776471  293223 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:38.776496  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:38.778346  293223 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-375252 addons enable metrics-server
	
	I1216 05:05:38.781799  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:38.781820  293223 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
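The 500 body above is the apiserver's verbose healthz report: every registered check and poststarthook gets a [+]/[-] marker, and the two [-] entries (rbac/bootstrap-roles and the bootstrap priority classes) routinely fail for the first seconds after startup while those hooks finish. Once the apiserver is reachable, the same report can be pulled by hand through the configured credentials (context name from this profile):

    # fetch the verbose healthz report the poller is parsing
    kubectl --context default-k8s-diff-port-375252 get --raw '/healthz?verbose'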
	I1216 05:05:38.784278  293223 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1216 05:05:33.808006  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:35.812587  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	W1216 05:05:38.308806  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:38.287293  292503 out.go:252]   - Booting up control plane ...
	I1216 05:05:38.287409  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:05:38.287505  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:05:38.287591  292503 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:05:38.306562  292503 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:05:38.306822  292503 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:05:38.315641  292503 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:05:38.317230  292503 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:05:38.317321  292503 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:05:38.439332  292503 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:05:38.439522  292503 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:05:38.940884  292503 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.794985ms
	I1216 05:05:38.943858  292503 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 05:05:38.943984  292503 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 05:05:38.944138  292503 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 05:05:38.944228  292503 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 05:05:39.947945  292503 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004000869s
	I1216 05:05:41.136529  292503 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.192438136s
	I1216 05:05:42.945750  292503 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001716849s
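The three control-plane-check probes above hit each component's own health endpoint; from the node itself they can be replayed with curl (ports and the 192.168.85.2 apiserver address come from the log; -k skips TLS verification, and anonymous access to these paths relies on the components' default always-allowed health paths):

    curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
    curl -sk https://192.168.85.2:8443/livez   # kube-apiserver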
	I1216 05:05:42.965946  292503 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 05:05:42.978062  292503 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 05:05:42.988785  292503 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 05:05:42.989107  292503 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-418580 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 05:05:42.998633  292503 kubeadm.go:319] [bootstrap-token] Using token: w2w8nj.tpu6zggrwl1bblxh
	I1216 05:05:38.785917  293223 addons.go:530] duration metric: took 3.04166463s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1216 05:05:39.277101  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:39.282770  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:05:39.282808  293223 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:05:39.777190  293223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1216 05:05:39.781241  293223 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1216 05:05:39.782224  293223 api_server.go:141] control plane version: v1.34.2
	I1216 05:05:39.782245  293223 api_server.go:131] duration metric: took 1.005765809s to wait for apiserver health ...
	I1216 05:05:39.782254  293223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:39.785805  293223 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:39.785846  293223 system_pods.go:61] "coredns-66bc5c9577-65x5z" [8c610230-fb3c-4bef-a6d3-8870276ed93e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:39.785855  293223 system_pods.go:61] "etcd-default-k8s-diff-port-375252" [c04dc1e6-e72a-4915-8ed7-4976b69962c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:39.785870  293223 system_pods.go:61] "kindnet-xml94" [2f0ff435-e0ff-441e-bdf8-0cf9a6da784c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:39.785882  293223 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-375252" [51755996-a6d5-4427-8ecf-d5635d5bae94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:39.785897  293223 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-375252" [01874cfa-681a-48b1-8378-2f33c21185ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:39.785908  293223 system_pods.go:61] "kube-proxy-dmw9t" [c72d3d28-eaa9-4a35-b7b9-5207edf40372] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:39.785916  293223 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-375252" [e1c1c032-6d2a-4ca9-bf67-137e7f061874] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:39.785927  293223 system_pods.go:61] "storage-provisioner" [e819c1d3-f079-4b3e-a4b1-9d2cfbf37344] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:39.785934  293223 system_pods.go:74] duration metric: took 3.674729ms to wait for pod list to return data ...
	I1216 05:05:39.785950  293223 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:39.788192  293223 default_sa.go:45] found service account: "default"
	I1216 05:05:39.788211  293223 default_sa.go:55] duration metric: took 2.254358ms for default service account to be created ...
	I1216 05:05:39.788220  293223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:05:39.790710  293223 system_pods.go:86] 8 kube-system pods found
	I1216 05:05:39.790733  293223 system_pods.go:89] "coredns-66bc5c9577-65x5z" [8c610230-fb3c-4bef-a6d3-8870276ed93e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:05:39.790755  293223 system_pods.go:89] "etcd-default-k8s-diff-port-375252" [c04dc1e6-e72a-4915-8ed7-4976b69962c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:05:39.790763  293223 system_pods.go:89] "kindnet-xml94" [2f0ff435-e0ff-441e-bdf8-0cf9a6da784c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:39.790768  293223 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-375252" [51755996-a6d5-4427-8ecf-d5635d5bae94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:05:39.790775  293223 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-375252" [01874cfa-681a-48b1-8378-2f33c21185ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:05:39.790782  293223 system_pods.go:89] "kube-proxy-dmw9t" [c72d3d28-eaa9-4a35-b7b9-5207edf40372] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:39.790788  293223 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-375252" [e1c1c032-6d2a-4ca9-bf67-137e7f061874] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:05:39.790796  293223 system_pods.go:89] "storage-provisioner" [e819c1d3-f079-4b3e-a4b1-9d2cfbf37344] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 05:05:39.790802  293223 system_pods.go:126] duration metric: took 2.577172ms to wait for k8s-apps to be running ...
	I1216 05:05:39.790808  293223 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:05:39.790842  293223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:39.803851  293223 system_svc.go:56] duration metric: took 13.033531ms WaitForService to wait for kubelet
	I1216 05:05:39.803883  293223 kubeadm.go:587] duration metric: took 4.059935198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:39.803905  293223 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:39.807402  293223 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:39.807433  293223 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:39.807450  293223 node_conditions.go:105] duration metric: took 3.538982ms to run NodePressure ...
	I1216 05:05:39.807465  293223 start.go:242] waiting for startup goroutines ...
	I1216 05:05:39.807473  293223 start.go:247] waiting for cluster config update ...
	I1216 05:05:39.807485  293223 start.go:256] writing updated cluster config ...
	I1216 05:05:39.807774  293223 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:39.811763  293223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:39.815359  293223 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-65x5z" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:05:41.820227  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:40.310699  285878 pod_ready.go:104] pod "coredns-66bc5c9577-47h64" is not "Ready", error: <nil>
	I1216 05:05:42.309120  285878 pod_ready.go:94] pod "coredns-66bc5c9577-47h64" is "Ready"
	I1216 05:05:42.309150  285878 pod_ready.go:86] duration metric: took 33.006648417s for pod "coredns-66bc5c9577-47h64" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.311734  285878 pod_ready.go:83] waiting for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.318072  285878 pod_ready.go:94] pod "etcd-embed-certs-127599" is "Ready"
	I1216 05:05:42.318101  285878 pod_ready.go:86] duration metric: took 6.341713ms for pod "etcd-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.320261  285878 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.324744  285878 pod_ready.go:94] pod "kube-apiserver-embed-certs-127599" is "Ready"
	I1216 05:05:42.324772  285878 pod_ready.go:86] duration metric: took 4.487423ms for pod "kube-apiserver-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.327251  285878 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.506418  285878 pod_ready.go:94] pod "kube-controller-manager-embed-certs-127599" is "Ready"
	I1216 05:05:42.506458  285878 pod_ready.go:86] duration metric: took 179.18035ms for pod "kube-controller-manager-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:42.706844  285878 pod_ready.go:83] waiting for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.107603  285878 pod_ready.go:94] pod "kube-proxy-j79cw" is "Ready"
	I1216 05:05:43.107634  285878 pod_ready.go:86] duration metric: took 400.764534ms for pod "kube-proxy-j79cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.306943  285878 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.706579  285878 pod_ready.go:94] pod "kube-scheduler-embed-certs-127599" is "Ready"
	I1216 05:05:43.706605  285878 pod_ready.go:86] duration metric: took 399.628671ms for pod "kube-scheduler-embed-certs-127599" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:05:43.706621  285878 pod_ready.go:40] duration metric: took 34.408131714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:05:43.766223  285878 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:05:43.767753  285878 out.go:179] * Done! kubectl is now configured to use "embed-certs-127599" cluster and "default" namespace by default
	I1216 05:05:43.000076  292503 out.go:252]   - Configuring RBAC rules ...
	I1216 05:05:43.000232  292503 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 05:05:43.006930  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 05:05:43.013081  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 05:05:43.015993  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 05:05:43.019821  292503 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 05:05:43.022802  292503 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 05:05:43.361353  292503 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 05:05:43.785608  292503 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 05:05:44.352605  292503 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 05:05:44.353897  292503 kubeadm.go:319] 
	I1216 05:05:44.353997  292503 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 05:05:44.354009  292503 kubeadm.go:319] 
	I1216 05:05:44.354128  292503 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 05:05:44.354140  292503 kubeadm.go:319] 
	I1216 05:05:44.354176  292503 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 05:05:44.354269  292503 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 05:05:44.354347  292503 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 05:05:44.354358  292503 kubeadm.go:319] 
	I1216 05:05:44.354432  292503 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 05:05:44.354442  292503 kubeadm.go:319] 
	I1216 05:05:44.354516  292503 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 05:05:44.354538  292503 kubeadm.go:319] 
	I1216 05:05:44.354607  292503 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 05:05:44.354731  292503 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 05:05:44.354839  292503 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 05:05:44.354850  292503 kubeadm.go:319] 
	I1216 05:05:44.354979  292503 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 05:05:44.355113  292503 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 05:05:44.355125  292503 kubeadm.go:319] 
	I1216 05:05:44.355233  292503 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w2w8nj.tpu6zggrwl1bblxh \
	I1216 05:05:44.355379  292503 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 05:05:44.355418  292503 kubeadm.go:319] 	--control-plane 
	I1216 05:05:44.355425  292503 kubeadm.go:319] 
	I1216 05:05:44.355541  292503 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 05:05:44.355547  292503 kubeadm.go:319] 
	I1216 05:05:44.355652  292503 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w2w8nj.tpu6zggrwl1bblxh \
	I1216 05:05:44.355787  292503 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
	I1216 05:05:44.359394  292503 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:05:44.359535  292503 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
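The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA's public key so a joining node can authenticate the control plane before trusting it. It can be recomputed from the CA certificate with the pipeline kubeadm documents (the certificate lives under minikube's certs dir rather than the stock /etc/kubernetes/pki here):

    # recompute the sha256 discovery hash from the cluster CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'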
	I1216 05:05:44.359550  292503 cni.go:84] Creating CNI manager for ""
	I1216 05:05:44.359559  292503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:44.361669  292503 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 05:05:44.362987  292503 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 05:05:44.368597  292503 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1216 05:05:44.368615  292503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 05:05:44.386199  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
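The CNI manifest applied above is minikube's kindnet deployment for the docker driver + crio runtime combination. A quick way to confirm it landed (the DaemonSet name kindnet is an assumption based on the kindnet-* pod names later in this log):

    # confirm the CNI daemonset created by cni.yaml
    kubectl --context newest-cni-418580 -n kube-system get daemonset kindnet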
	W1216 05:05:43.822303  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:45.823500  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:05:48.320056  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	I1216 05:05:44.892254  292503 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 05:05:44.892325  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:44.892390  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-418580 minikube.k8s.io/updated_at=2025_12_16T05_05_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=newest-cni-418580 minikube.k8s.io/primary=true
	I1216 05:05:44.905578  292503 ops.go:34] apiserver oom_adj: -16
	I1216 05:05:44.996594  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:45.497238  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:45.997166  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:46.496875  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:46.997245  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:47.497214  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:47.997272  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:48.497203  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:48.996890  292503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:05:49.067493  292503 kubeadm.go:1114] duration metric: took 4.175226314s to wait for elevateKubeSystemPrivileges
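The run of `kubectl get sa default` calls above is a 500ms poll: the default ServiceAccount only appears once the controller-manager's service-account controller has processed the namespace, and the minikube-rbac ClusterRoleBinding created earlier needs that account to exist. A hand-rolled equivalent of the wait:

    # block until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done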
	I1216 05:05:49.067533  292503 kubeadm.go:403] duration metric: took 13.322133667s to StartCluster
	I1216 05:05:49.067556  292503 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:49.067620  292503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:49.070444  292503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:49.070849  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 05:05:49.071407  292503 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:49.072456  292503 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:05:49.072571  292503 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-418580"
	I1216 05:05:49.072594  292503 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-418580"
	I1216 05:05:49.072627  292503 host.go:66] Checking if "newest-cni-418580" exists ...
	I1216 05:05:49.072680  292503 addons.go:70] Setting default-storageclass=true in profile "newest-cni-418580"
	I1216 05:05:49.072712  292503 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-418580"
	I1216 05:05:49.073075  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.072422  292503 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:49.073196  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.074924  292503 out.go:179] * Verifying Kubernetes components...
	I1216 05:05:49.077139  292503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:05:49.098754  292503 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:05:49.100209  292503 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:49.100235  292503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:05:49.100297  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:49.101275  292503 addons.go:239] Setting addon default-storageclass=true in "newest-cni-418580"
	I1216 05:05:49.101340  292503 host.go:66] Checking if "newest-cni-418580" exists ...
	I1216 05:05:49.101798  292503 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:49.131640  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:49.132569  292503 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:49.132591  292503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:05:49.132686  292503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:05:49.173979  292503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:05:49.209645  292503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 05:05:49.255857  292503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:05:49.269695  292503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:05:49.303402  292503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:05:49.394100  292503 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
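The pipeline a few lines up rewrites the coredns ConfigMap in place: it fetches the Corefile, uses sed to splice a hosts block (mapping host.minikube.internal to the host gateway) ahead of the forward plugin and a log directive ahead of errors, then replaces the ConfigMap. The injected fragment, reconstructed from the sed expressions:

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }

fallthrough makes CoreDNS pass every other name on to the remaining plugins, so only the minikube host record is answered locally.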
	I1216 05:05:49.395894  292503 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:05:49.395955  292503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:49.590389  292503 api_server.go:72] duration metric: took 517.235446ms to wait for apiserver process to appear ...
	I1216 05:05:49.590414  292503 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:05:49.590436  292503 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:05:49.596111  292503 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:05:49.597185  292503 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:05:49.597220  292503 api_server.go:131] duration metric: took 6.79936ms to wait for apiserver health ...
	I1216 05:05:49.597229  292503 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:05:49.600173  292503 system_pods.go:59] 8 kube-system pods found
	I1216 05:05:49.600202  292503 system_pods.go:61] "coredns-7d764666f9-67tdz" [2e7ab1b2-db44-41e7-9601-fe1fb0f149b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:05:49.600210  292503 system_pods.go:61] "etcd-newest-cni-418580" [5345f203-8998-46d2-af9b-9f60ee31c9b1] Running
	I1216 05:05:49.600220  292503 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 05:05:49.600222  292503 system_pods.go:61] "kindnet-btffs" [54b00649-4160-4dab-bce1-83080b927bde] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:05:49.600237  292503 system_pods.go:61] "kube-apiserver-newest-cni-418580" [922b709d-f979-411d-87d3-1d6e2085c405] Running
	I1216 05:05:49.600308  292503 system_pods.go:61] "kube-controller-manager-newest-cni-418580" [41246201-9b24-43b9-a291-b5d087085e7a] Running
	I1216 05:05:49.600335  292503 system_pods.go:61] "kube-proxy-ls9s9" [dbf8d374-0c38-4c64-ac2c-9a71db8b4223] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:05:49.600346  292503 system_pods.go:61] "kube-scheduler-newest-cni-418580" [7a507411-b5e7-4bc1-8740-e07e3b476f8d] Running
	I1216 05:05:49.600363  292503 system_pods.go:61] "storage-provisioner" [fa6b12e1-67b9-40bb-93c3-42a32f0d9ed4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:05:49.600373  292503 system_pods.go:74] duration metric: took 3.137662ms to wait for pod list to return data ...
	I1216 05:05:49.600386  292503 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:05:49.601530  292503 addons.go:530] duration metric: took 529.068992ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:05:49.602559  292503 default_sa.go:45] found service account: "default"
	I1216 05:05:49.602574  292503 default_sa.go:55] duration metric: took 2.179965ms for default service account to be created ...
	I1216 05:05:49.602582  292503 kubeadm.go:587] duration metric: took 529.436023ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 05:05:49.602608  292503 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:05:49.604976  292503 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:05:49.605000  292503 node_conditions.go:123] node cpu capacity is 8
	I1216 05:05:49.605061  292503 node_conditions.go:105] duration metric: took 2.447077ms to run NodePressure ...
	I1216 05:05:49.605076  292503 start.go:242] waiting for startup goroutines ...
	I1216 05:05:49.899603  292503 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-418580" context rescaled to 1 replicas
	I1216 05:05:49.899646  292503 start.go:247] waiting for cluster config update ...
	I1216 05:05:49.899661  292503 start.go:256] writing updated cluster config ...
	I1216 05:05:49.900004  292503 ssh_runner.go:195] Run: rm -f paused
	I1216 05:05:49.957073  292503 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:05:49.959152  292503 out.go:179] * Done! kubectl is now configured to use "newest-cni-418580" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.455126484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.456991063Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f3e7e750-bd68-4831-b863-9378a660a6b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.459359129Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d8e35533-e1ba-4859-9252-131c1cf7a5cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.460185717Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.460833139Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.461084647Z" level=info msg="Ran pod sandbox 6b91d56821e144b9f2d3065924f0ec0e897fd640602cbd81d31e2a16e43aba3c with infra container: kube-system/kube-proxy-ls9s9/POD" id=f3e7e750-bd68-4831-b863-9378a660a6b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.461747364Z" level=info msg="Ran pod sandbox 529f57f47dd096a80e8ec05297ec1d312442ff467c1f91f6786fbc617fbcb5fd with infra container: kube-system/kindnet-btffs/POD" id=d8e35533-e1ba-4859-9252-131c1cf7a5cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.462303747Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3497e5fb-077c-4029-8aa9-9813547a63ba name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.462647527Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3d9b9dca-bdac-4f72-b5c6-d1df962ba816 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.463221474Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a0ee1f5e-dfcb-48f1-b180-b4f2a1639bba name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.463442947Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8c6b8949-58ba-442f-b533-0a42acf41975 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.467167715Z" level=info msg="Creating container: kube-system/kube-proxy-ls9s9/kube-proxy" id=5dbf22a3-2660-411e-ad54-7813e575fc24 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.467290492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.468982032Z" level=info msg="Creating container: kube-system/kindnet-btffs/kindnet-cni" id=4d7212e0-138b-46f4-9635-acd55674e49a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.469099844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.474467954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.474983166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.474995085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.475484065Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.497879988Z" level=info msg="Created container 21650a5c5e9523045c0bf6e29e5590cfd19a667c99fbccbd7956e1c7ab7a4737: kube-system/kindnet-btffs/kindnet-cni" id=4d7212e0-138b-46f4-9635-acd55674e49a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.49870361Z" level=info msg="Starting container: 21650a5c5e9523045c0bf6e29e5590cfd19a667c99fbccbd7956e1c7ab7a4737" id=f7cdbffb-922a-47ce-881e-d3548c894205 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.501057299Z" level=info msg="Started container" PID=1599 containerID=21650a5c5e9523045c0bf6e29e5590cfd19a667c99fbccbd7956e1c7ab7a4737 description=kube-system/kindnet-btffs/kindnet-cni id=f7cdbffb-922a-47ce-881e-d3548c894205 name=/runtime.v1.RuntimeService/StartContainer sandboxID=529f57f47dd096a80e8ec05297ec1d312442ff467c1f91f6786fbc617fbcb5fd
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.503994593Z" level=info msg="Created container 2d1b2b9c316011dd0479a5954858881997b6ecb3fe3f7ea33ed45bce47cad7be: kube-system/kube-proxy-ls9s9/kube-proxy" id=5dbf22a3-2660-411e-ad54-7813e575fc24 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.504586395Z" level=info msg="Starting container: 2d1b2b9c316011dd0479a5954858881997b6ecb3fe3f7ea33ed45bce47cad7be" id=f1c64ffc-279a-4dc8-b992-c0880f20eefa name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:49 newest-cni-418580 crio[781]: time="2025-12-16T05:05:49.507285303Z" level=info msg="Started container" PID=1600 containerID=2d1b2b9c316011dd0479a5954858881997b6ecb3fe3f7ea33ed45bce47cad7be description=kube-system/kube-proxy-ls9s9/kube-proxy id=f1c64ffc-279a-4dc8-b992-c0880f20eefa name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b91d56821e144b9f2d3065924f0ec0e897fd640602cbd81d31e2a16e43aba3c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	21650a5c5e952       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   529f57f47dd09       kindnet-btffs                               kube-system
	2d1b2b9c31601       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   6b91d56821e14       kube-proxy-ls9s9                            kube-system
	966bd9cddeab5       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   12 seconds ago      Running             kube-apiserver            0                   3ff5c661e9361       kube-apiserver-newest-cni-418580            kube-system
	b96ab03e92029       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   12 seconds ago      Running             kube-controller-manager   0                   b1fb81cfd285a       kube-controller-manager-newest-cni-418580   kube-system
	80358715cd202       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   12 seconds ago      Running             kube-scheduler            0                   6c60285655495       kube-scheduler-newest-cni-418580            kube-system
	20e6186f42005       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   12 seconds ago      Running             etcd                      0                   87bb20b625d00       etcd-newest-cni-418580                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-418580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-418580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=newest-cni-418580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_05_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:05:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-418580
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:05:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:05:43 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:05:43 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:05:43 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 16 Dec 2025 05:05:43 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-418580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                964ef0bc-6b69-490b-a586-8c76d18cc2c2
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-418580                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-btffs                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-418580             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-418580    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-ls9s9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-418580             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-418580 event: Registered Node newest-cni-418580 in Controller
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [20e6186f42005e68d8bb66ce058d41a12a3b382b8a476645999b3a89c5241f5e] <==
	{"level":"warn","ts":"2025-12-16T05:05:40.480932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.487619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.494451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.501235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.509361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.516026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.522894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.553806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.561567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.569673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.577215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.583905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.590168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.597301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.610508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.625345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.632070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.638387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.644933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:40.691640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:44.855407Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.45551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-newest-cni-418580\" limit:1 ","response":"range_response_count:1 size:4418"}
	{"level":"info","ts":"2025-12-16T05:05:44.856279Z","caller":"traceutil/trace.go:172","msg":"trace[271176981] range","detail":"{range_begin:/registry/pods/kube-system/etcd-newest-cni-418580; range_end:; response_count:1; response_revision:266; }","duration":"146.344201ms","start":"2025-12-16T05:05:44.709918Z","end":"2025-12-16T05:05:44.856262Z","steps":["trace[271176981] 'agreement among raft nodes before linearized reading'  (duration: 145.342594ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:05:44.855406Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.283053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-418580\" limit:1 ","response":"range_response_count:1 size:3392"}
	{"level":"info","ts":"2025-12-16T05:05:44.856791Z","caller":"traceutil/trace.go:172","msg":"trace[882855263] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-418580; range_end:; response_count:1; response_revision:266; }","duration":"147.682691ms","start":"2025-12-16T05:05:44.709090Z","end":"2025-12-16T05:05:44.856772Z","steps":["trace[882855263] 'agreement among raft nodes before linearized reading'  (duration: 146.16716ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:05:44.855588Z","caller":"traceutil/trace.go:172","msg":"trace[1173259968] transaction","detail":"{read_only:false; response_revision:267; number_of_response:1; }","duration":"128.300347ms","start":"2025-12-16T05:05:44.727269Z","end":"2025-12-16T05:05:44.855569Z","steps":["trace[1173259968] 'process raft request'  (duration: 128.14563ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:05:51 up 48 min,  0 user,  load average: 4.18, 2.98, 2.04
	Linux newest-cni-418580 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [21650a5c5e9523045c0bf6e29e5590cfd19a667c99fbccbd7956e1c7ab7a4737] <==
	I1216 05:05:49.791956       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:05:49.792204       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 05:05:49.792325       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:05:49.792339       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:05:49.792360       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:05:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:05:49.992162       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:05:50.089334       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:05:50.089451       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:05:50.089674       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:05:50.489965       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:05:50.489992       1 metrics.go:72] Registering metrics
	I1216 05:05:50.490484       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [966bd9cddeab59f271d1ca6112182c03e0075514e1929197453969a3c7beb7f8] <==
	I1216 05:05:41.169571       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 05:05:41.169597       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1216 05:05:41.169709       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:05:41.169761       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:05:41.169777       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:05:41.176185       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1216 05:05:41.205103       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:41.371102       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:05:42.073115       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1216 05:05:42.076978       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1216 05:05:42.076993       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 05:05:42.560427       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:05:42.594872       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:05:42.678958       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1216 05:05:42.685162       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1216 05:05:42.686224       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:05:42.690788       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:05:43.099933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:05:43.761967       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:05:43.784262       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1216 05:05:43.801149       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 05:05:48.650822       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:05:48.801634       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:48.806571       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:49.102260       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b96ab03e920297f5b0ee81d767dbf74f826ff81d780de302031ccfc63a694c54] <==
	I1216 05:05:47.904593       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.904725       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.904753       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.904790       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.904801       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.904844       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.904871       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.905339       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.905357       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.905368       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.904792       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.905705       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.905801       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.905811       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.905833       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.905850       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.906216       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.906331       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:47.910679       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:47.914361       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-418580" podCIDRs=["10.42.0.0/24"]
	I1216 05:05:47.917987       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:48.005105       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:48.005125       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 05:05:48.005132       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 05:05:48.010885       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [2d1b2b9c316011dd0479a5954858881997b6ecb3fe3f7ea33ed45bce47cad7be] <==
	I1216 05:05:49.544298       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:05:49.627340       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:05:49.727621       1 shared_informer.go:377] "Caches are synced"
	I1216 05:05:49.727715       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 05:05:49.727828       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:05:49.747752       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:05:49.747801       1 server_linux.go:136] "Using iptables Proxier"
	I1216 05:05:49.753246       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:05:49.753684       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 05:05:49.753706       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:49.755066       1 config.go:200] "Starting service config controller"
	I1216 05:05:49.755135       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:05:49.755114       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:05:49.755160       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:05:49.755107       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:05:49.755184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:05:49.755220       1 config.go:309] "Starting node config controller"
	I1216 05:05:49.755235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:05:49.755252       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:05:49.855843       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:05:49.855945       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:05:49.855956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [80358715cd202ae80c758dddc09f1256e1b46795d6cd75fdc8c84c67ece8c5f8] <==
	E1216 05:05:42.067584       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1216 05:05:42.068741       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1216 05:05:42.071820       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1216 05:05:42.072912       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1216 05:05:42.101264       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1216 05:05:42.102317       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1216 05:05:42.185054       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 05:05:42.186057       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1216 05:05:42.201230       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 05:05:42.202326       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1216 05:05:42.202384       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1216 05:05:42.203367       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1216 05:05:42.232745       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1216 05:05:42.233782       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1216 05:05:42.240054       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1216 05:05:42.241014       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1216 05:05:42.270401       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1216 05:05:42.271604       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1216 05:05:42.323243       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1216 05:05:42.324345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1216 05:05:42.336155       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1216 05:05:42.337312       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1216 05:05:42.396486       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1216 05:05:42.397464       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1216 05:05:44.031484       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 05:05:44 newest-cni-418580 kubelet[1319]: E1216 05:05:44.862663    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-418580" containerName="kube-apiserver"
	Dec 16 05:05:44 newest-cni-418580 kubelet[1319]: I1216 05:05:44.874160    1319 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-418580" podStartSLOduration=1.874121548 podStartE2EDuration="1.874121548s" podCreationTimestamp="2025-12-16 05:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:05:44.873666639 +0000 UTC m=+1.323207638" watchObservedRunningTime="2025-12-16 05:05:44.874121548 +0000 UTC m=+1.323662547"
	Dec 16 05:05:44 newest-cni-418580 kubelet[1319]: I1216 05:05:44.874330    1319 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-418580" podStartSLOduration=1.87432058 podStartE2EDuration="1.87432058s" podCreationTimestamp="2025-12-16 05:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:05:44.860399109 +0000 UTC m=+1.309940109" watchObservedRunningTime="2025-12-16 05:05:44.87432058 +0000 UTC m=+1.323861579"
	Dec 16 05:05:44 newest-cni-418580 kubelet[1319]: I1216 05:05:44.888411    1319 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-418580" podStartSLOduration=1.888390174 podStartE2EDuration="1.888390174s" podCreationTimestamp="2025-12-16 05:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:05:44.888150394 +0000 UTC m=+1.337691392" watchObservedRunningTime="2025-12-16 05:05:44.888390174 +0000 UTC m=+1.337931175"
	Dec 16 05:05:44 newest-cni-418580 kubelet[1319]: I1216 05:05:44.904823    1319 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-418580" podStartSLOduration=1.904688985 podStartE2EDuration="1.904688985s" podCreationTimestamp="2025-12-16 05:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:05:44.903833587 +0000 UTC m=+1.353374586" watchObservedRunningTime="2025-12-16 05:05:44.904688985 +0000 UTC m=+1.354230001"
	Dec 16 05:05:45 newest-cni-418580 kubelet[1319]: E1216 05:05:45.679925    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-418580" containerName="kube-apiserver"
	Dec 16 05:05:45 newest-cni-418580 kubelet[1319]: E1216 05:05:45.680004    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-418580" containerName="kube-scheduler"
	Dec 16 05:05:45 newest-cni-418580 kubelet[1319]: E1216 05:05:45.680097    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-418580" containerName="etcd"
	Dec 16 05:05:46 newest-cni-418580 kubelet[1319]: E1216 05:05:46.682227    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-418580" containerName="kube-apiserver"
	Dec 16 05:05:46 newest-cni-418580 kubelet[1319]: E1216 05:05:46.682330    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-418580" containerName="kube-scheduler"
	Dec 16 05:05:48 newest-cni-418580 kubelet[1319]: I1216 05:05:48.015508    1319 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 16 05:05:48 newest-cni-418580 kubelet[1319]: I1216 05:05:48.016298    1319 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.170903    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbf8d374-0c38-4c64-ac2c-9a71db8b4223-kube-proxy\") pod \"kube-proxy-ls9s9\" (UID: \"dbf8d374-0c38-4c64-ac2c-9a71db8b4223\") " pod="kube-system/kube-proxy-ls9s9"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.170957    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbf8d374-0c38-4c64-ac2c-9a71db8b4223-lib-modules\") pod \"kube-proxy-ls9s9\" (UID: \"dbf8d374-0c38-4c64-ac2c-9a71db8b4223\") " pod="kube-system/kube-proxy-ls9s9"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.170986    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sss9\" (UniqueName: \"kubernetes.io/projected/dbf8d374-0c38-4c64-ac2c-9a71db8b4223-kube-api-access-8sss9\") pod \"kube-proxy-ls9s9\" (UID: \"dbf8d374-0c38-4c64-ac2c-9a71db8b4223\") " pod="kube-system/kube-proxy-ls9s9"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.171012    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-cni-cfg\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.171221    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-lib-modules\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.171305    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbf8d374-0c38-4c64-ac2c-9a71db8b4223-xtables-lock\") pod \"kube-proxy-ls9s9\" (UID: \"dbf8d374-0c38-4c64-ac2c-9a71db8b4223\") " pod="kube-system/kube-proxy-ls9s9"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.171346    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q9s8\" (UniqueName: \"kubernetes.io/projected/54b00649-4160-4dab-bce1-83080b927bde-kube-api-access-8q9s8\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.171379    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-xtables-lock\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.702072    1319 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-btffs" podStartSLOduration=0.702053911 podStartE2EDuration="702.053911ms" podCreationTimestamp="2025-12-16 05:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:05:49.701705321 +0000 UTC m=+6.151246320" watchObservedRunningTime="2025-12-16 05:05:49.702053911 +0000 UTC m=+6.151594910"
	Dec 16 05:05:49 newest-cni-418580 kubelet[1319]: I1216 05:05:49.711160    1319 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ls9s9" podStartSLOduration=0.711140547 podStartE2EDuration="711.140547ms" podCreationTimestamp="2025-12-16 05:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 05:05:49.711057865 +0000 UTC m=+6.160598866" watchObservedRunningTime="2025-12-16 05:05:49.711140547 +0000 UTC m=+6.160681547"
	Dec 16 05:05:50 newest-cni-418580 kubelet[1319]: E1216 05:05:50.228932    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-418580" containerName="kube-scheduler"
	Dec 16 05:05:50 newest-cni-418580 kubelet[1319]: E1216 05:05:50.594925    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-418580" containerName="kube-apiserver"
	Dec 16 05:05:50 newest-cni-418580 kubelet[1319]: E1216 05:05:50.984541    1319 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-418580" containerName="kube-controller-manager"
	

                                                
                                                
-- /stdout --
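Note on the CoreDNS rewrite logged at 05:05:49 above: the sed pipeline run over the "coredns" ConfigMap (confirmed by the "host record injected into CoreDNS's ConfigMap" line) inserts a hosts stanza ahead of the Corefile's forward directive so that host.minikube.internal resolves to the host gateway. Reconstructed from that command alone, with indentation approximate, the injected block is:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}

The same pipeline also inserts a "log" directive before the "errors" line, then applies the result with "kubectl replace -f -".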
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-418580 -n newest-cni-418580
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-418580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-67tdz storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner: exit status 1 (62.544953ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-67tdz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.17s)
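Note: the two "NotFound" errors above most likely reflect a race rather than a missing cluster. The pod names came from the field-selector query a moment earlier, and the coredns deployment had just been rescaled to 1 replica (kapi.go:214 in the log above), so the pending pods plausibly no longer existed under those names by the time "kubectl describe" ran. Re-running the same query from the post-mortem would list whatever is non-running at that instant:

	kubectl --context newest-cni-418580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running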

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-127599 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-127599 --alsologtostderr -v=1: exit status 80 (1.829128642s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-127599 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:05:55.612735  301306 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:05:55.612845  301306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:55.612853  301306 out.go:374] Setting ErrFile to fd 2...
	I1216 05:05:55.612856  301306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:55.613094  301306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:05:55.613350  301306 out.go:368] Setting JSON to false
	I1216 05:05:55.613367  301306 mustload.go:66] Loading cluster: embed-certs-127599
	I1216 05:05:55.613723  301306 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:55.614142  301306 cli_runner.go:164] Run: docker container inspect embed-certs-127599 --format={{.State.Status}}
	I1216 05:05:55.641940  301306 host.go:66] Checking if "embed-certs-127599" exists ...
	I1216 05:05:55.642428  301306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:55.713664  301306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-16 05:05:55.702417325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:55.714582  301306 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-127599 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 05:05:55.717419  301306 out.go:179] * Pausing node embed-certs-127599 ... 
	I1216 05:05:55.718707  301306 host.go:66] Checking if "embed-certs-127599" exists ...
	I1216 05:05:55.719073  301306 ssh_runner.go:195] Run: systemctl --version
	I1216 05:05:55.719132  301306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-127599
	I1216 05:05:55.746092  301306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/embed-certs-127599/id_rsa Username:docker}
	I1216 05:05:55.841997  301306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:55.856597  301306 pause.go:52] kubelet running: true
	I1216 05:05:55.856669  301306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:56.052040  301306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:56.052133  301306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:56.122270  301306 cri.go:89] found id: "7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099"
	I1216 05:05:56.122293  301306 cri.go:89] found id: "6ef37ce405289906c82809f9b2f49a5539a9c80ac13fe66e0ada6ec4a0635e26"
	I1216 05:05:56.122307  301306 cri.go:89] found id: "e6cc99576b73e6e7ecafa1c8ac14d847071aef08b420211f4e47b7a6d58c5570"
	I1216 05:05:56.122311  301306 cri.go:89] found id: "b8bf09036fb2b78accb7cae322e042c174300c1fcf135286f2f06672ac50623b"
	I1216 05:05:56.122313  301306 cri.go:89] found id: "53a75ca180b5e84da8dc5f5902b75fb40c7241e939b25f25a63d651afca47133"
	I1216 05:05:56.122317  301306 cri.go:89] found id: "5aba47586bca5359899a12179f858df845d0a3d7ff35738262380c0dc7df2808"
	I1216 05:05:56.122319  301306 cri.go:89] found id: "9470750a66d86139939d4b618522b12a7fec97282112773930ad51d8f35b0afe"
	I1216 05:05:56.122322  301306 cri.go:89] found id: "aaeef0aba55ec8b43be948ea78142e157863fc7464359991825255d4956d31c7"
	I1216 05:05:56.122340  301306 cri.go:89] found id: "40939f377b2ea50bcacad648ad4f8c9351d77ec9c17f14735f75e1fce438fe46"
	I1216 05:05:56.122353  301306 cri.go:89] found id: "7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c"
	I1216 05:05:56.122361  301306 cri.go:89] found id: "322e8d3048a4f3b13c56188c19486d7ec67867e1670193f355dbd8b3ac3c3f74"
	I1216 05:05:56.122366  301306 cri.go:89] found id: ""
	I1216 05:05:56.122404  301306 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:56.134713  301306 retry.go:31] will retry after 290.77751ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:56Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:56.426267  301306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:56.446678  301306 pause.go:52] kubelet running: false
	I1216 05:05:56.446770  301306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:56.602791  301306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:56.602883  301306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:56.667005  301306 cri.go:89] found id: "7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099"
	I1216 05:05:56.667061  301306 cri.go:89] found id: "6ef37ce405289906c82809f9b2f49a5539a9c80ac13fe66e0ada6ec4a0635e26"
	I1216 05:05:56.667066  301306 cri.go:89] found id: "e6cc99576b73e6e7ecafa1c8ac14d847071aef08b420211f4e47b7a6d58c5570"
	I1216 05:05:56.667069  301306 cri.go:89] found id: "b8bf09036fb2b78accb7cae322e042c174300c1fcf135286f2f06672ac50623b"
	I1216 05:05:56.667072  301306 cri.go:89] found id: "53a75ca180b5e84da8dc5f5902b75fb40c7241e939b25f25a63d651afca47133"
	I1216 05:05:56.667076  301306 cri.go:89] found id: "5aba47586bca5359899a12179f858df845d0a3d7ff35738262380c0dc7df2808"
	I1216 05:05:56.667078  301306 cri.go:89] found id: "9470750a66d86139939d4b618522b12a7fec97282112773930ad51d8f35b0afe"
	I1216 05:05:56.667081  301306 cri.go:89] found id: "aaeef0aba55ec8b43be948ea78142e157863fc7464359991825255d4956d31c7"
	I1216 05:05:56.667084  301306 cri.go:89] found id: "40939f377b2ea50bcacad648ad4f8c9351d77ec9c17f14735f75e1fce438fe46"
	I1216 05:05:56.667100  301306 cri.go:89] found id: "7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c"
	I1216 05:05:56.667106  301306 cri.go:89] found id: "322e8d3048a4f3b13c56188c19486d7ec67867e1670193f355dbd8b3ac3c3f74"
	I1216 05:05:56.667108  301306 cri.go:89] found id: ""
	I1216 05:05:56.667145  301306 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:56.679203  301306 retry.go:31] will retry after 447.272396ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:56Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:05:57.126859  301306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:05:57.140701  301306 pause.go:52] kubelet running: false
	I1216 05:05:57.140757  301306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:05:57.283652  301306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:05:57.283746  301306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:05:57.348662  301306 cri.go:89] found id: "7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099"
	I1216 05:05:57.348683  301306 cri.go:89] found id: "6ef37ce405289906c82809f9b2f49a5539a9c80ac13fe66e0ada6ec4a0635e26"
	I1216 05:05:57.348687  301306 cri.go:89] found id: "e6cc99576b73e6e7ecafa1c8ac14d847071aef08b420211f4e47b7a6d58c5570"
	I1216 05:05:57.348690  301306 cri.go:89] found id: "b8bf09036fb2b78accb7cae322e042c174300c1fcf135286f2f06672ac50623b"
	I1216 05:05:57.348693  301306 cri.go:89] found id: "53a75ca180b5e84da8dc5f5902b75fb40c7241e939b25f25a63d651afca47133"
	I1216 05:05:57.348696  301306 cri.go:89] found id: "5aba47586bca5359899a12179f858df845d0a3d7ff35738262380c0dc7df2808"
	I1216 05:05:57.348698  301306 cri.go:89] found id: "9470750a66d86139939d4b618522b12a7fec97282112773930ad51d8f35b0afe"
	I1216 05:05:57.348701  301306 cri.go:89] found id: "aaeef0aba55ec8b43be948ea78142e157863fc7464359991825255d4956d31c7"
	I1216 05:05:57.348704  301306 cri.go:89] found id: "40939f377b2ea50bcacad648ad4f8c9351d77ec9c17f14735f75e1fce438fe46"
	I1216 05:05:57.348716  301306 cri.go:89] found id: "7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c"
	I1216 05:05:57.348718  301306 cri.go:89] found id: "322e8d3048a4f3b13c56188c19486d7ec67867e1670193f355dbd8b3ac3c3f74"
	I1216 05:05:57.348721  301306 cri.go:89] found id: ""
	I1216 05:05:57.348762  301306 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:05:57.362633  301306 out.go:203] 
	W1216 05:05:57.363720  301306 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:05:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 05:05:57.363742  301306 out.go:285] * 
	* 
	W1216 05:05:57.367958  301306 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:05:57.370033  301306 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-127599 --alsologtostderr -v=1 failed: exit status 80
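Editor's note: every retry above dies at the same step. `crictl ps` enumerates the kube-system containers successfully, but the follow-up `sudo runc list -f json` exits 1 because `/run/runc` does not exist inside the node. A minimal sketch for replaying the two halves of that check by hand, assuming the embed-certs-127599 profile is still up (the `/run/crun` path is an assumption about where a crun-backed CRI-O would keep its state, not something shown in this log):

	# Succeeds in the log above: containers as CRI-O reports them
	out/minikube-linux-amd64 -p embed-certs-127599 ssh -- sudo crictl ps -a
	# Fails in the log above: runc cannot find its state directory
	out/minikube-linux-amd64 -p embed-certs-127599 ssh -- sudo runc list -f json
	# Assumption: if CRI-O is backed by crun rather than runc, state would live under /run/crun
	out/minikube-linux-amd64 -p embed-certs-127599 ssh -- ls -d /run/runc /run/crun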
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-127599
helpers_test.go:244: (dbg) docker inspect embed-certs-127599:

-- stdout --
	[
	    {
	        "Id": "41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c",
	        "Created": "2025-12-16T05:03:55.761761294Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:04:59.07009539Z",
	            "FinishedAt": "2025-12-16T05:04:58.095135355Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/hostname",
	        "HostsPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/hosts",
	        "LogPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c-json.log",
	        "Name": "/embed-certs-127599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-127599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-127599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c",
	                "LowerDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-127599",
	                "Source": "/var/lib/docker/volumes/embed-certs-127599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-127599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-127599",
	                "name.minikube.sigs.k8s.io": "embed-certs-127599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c5cd9207de38b3800ed9c1e9aa53e08e0d146771cf6714b5848a89ac204266b0",
	            "SandboxKey": "/var/run/docker/netns/c5cd9207de38",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-127599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fdd74630820b3f4e548e9af2d3fc3c1a7861a593288c72638e11f0a9e6621a7b",
	                    "EndpointID": "5f4c9e0be8d97a2aa590a4da2b7ef7d799a8a91e3764a97da4c521bb281f37ac",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "fa:d8:41:d5:fb:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-127599",
	                        "41c3fa4e1ed9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
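Editor's note: the "Ports" block in the inspect output above is exactly what the pause command's cli_runner walked at 05:05:55.719 to resolve the forwarded SSH port. The same Go template from that log line can be replayed by hand and should print 33091 while the container is running:

	# Same template minikube used; resolves the host port bound to 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-127599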
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-127599 -n embed-certs-127599
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-127599 -n embed-certs-127599: exit status 2 (323.232831ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
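Editor's note: the `--format={{.Host}}` template only surfaces the host state, so the exit status 2 above ("may be ok") hides which component is not Running. A hedged way to get the per-component breakdown, assuming the v1.37.0 binary's status flags (field names may vary by version):

	# JSON output lists host, kubelet, and apiserver state separately
	out/minikube-linux-amd64 status -p embed-certs-127599 --output=json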
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-127599 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-127599 logs -n 25: (1.142373289s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-990631 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-127599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │                     │
	│ stop    │ -p embed-certs-127599 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p newest-cni-418580 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-418580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ embed-certs-127599 image list --format=json                                                                                                                                                                                                          │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p embed-certs-127599 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:05:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:05:54.903592  300655 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:05:54.903929  300655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:54.903941  300655 out.go:374] Setting ErrFile to fd 2...
	I1216 05:05:54.903948  300655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:54.904244  300655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:05:54.904859  300655 out.go:368] Setting JSON to false
	I1216 05:05:54.906323  300655 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2906,"bootTime":1765858649,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:05:54.906405  300655 start.go:143] virtualization: kvm guest
	I1216 05:05:54.908316  300655 out.go:179] * [newest-cni-418580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:05:54.909652  300655 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:05:54.909665  300655 notify.go:221] Checking for updates...
	I1216 05:05:54.912467  300655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:05:54.913700  300655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:54.915013  300655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:05:54.916255  300655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:05:54.917636  300655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:05:54.919464  300655 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:54.919998  300655 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:05:54.945201  300655 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:05:54.945319  300655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:55.003285  300655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:05:54.993241238 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:55.003385  300655 docker.go:319] overlay module found
	I1216 05:05:55.005991  300655 out.go:179] * Using the docker driver based on existing profile
	I1216 05:05:55.007029  300655 start.go:309] selected driver: docker
	I1216 05:05:55.007052  300655 start.go:927] validating driver "docker" against &{Name:newest-cni-418580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:55.007160  300655 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:05:55.007957  300655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:55.071045  300655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:05:55.06110758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:55.071334  300655 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 05:05:55.071358  300655 cni.go:84] Creating CNI manager for ""
	I1216 05:05:55.071412  300655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:55.071451  300655 start.go:353] cluster config:
	{Name:newest-cni-418580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-418580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:55.073158  300655 out.go:179] * Starting "newest-cni-418580" primary control-plane node in "newest-cni-418580" cluster
	I1216 05:05:55.074460  300655 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:05:55.075603  300655 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:05:55.076981  300655 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:05:55.077032  300655 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1216 05:05:55.077048  300655 cache.go:65] Caching tarball of preloaded images
	I1216 05:05:55.077061  300655 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:05:55.077143  300655 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:05:55.077161  300655 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 05:05:55.077274  300655 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/newest-cni-418580/config.json ...
	I1216 05:05:55.098853  300655 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:05:55.098873  300655 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:05:55.098895  300655 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:05:55.098931  300655 start.go:360] acquireMachinesLock for newest-cni-418580: {Name:mk4e64c75634b2ffa2d80d18181fef8e4c88f6a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:05:55.099066  300655 start.go:364] duration metric: took 71.585µs to acquireMachinesLock for "newest-cni-418580"
	I1216 05:05:55.099090  300655 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:05:55.099097  300655 fix.go:54] fixHost starting: 
	I1216 05:05:55.099419  300655 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:05:55.118930  300655 fix.go:112] recreateIfNeeded on newest-cni-418580: state=Stopped err=<nil>
	W1216 05:05:55.118966  300655 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 16 05:05:28 embed-certs-127599 crio[568]: time="2025-12-16T05:05:28.45167293Z" level=info msg="Started container" PID=1763 containerID=cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper id=1a57edbe-5cc0-42ea-807c-cd4c53dbe8bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=59ecfb9a604051327473a965cbcd4db0e47e89485744c988a1690c0db4d7e0b9
	Dec 16 05:05:28 embed-certs-127599 crio[568]: time="2025-12-16T05:05:28.520650313Z" level=info msg="Removing container: 6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974" id=6e7bf151-29ce-4f63-8c1e-5a9ab50f6476 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:28 embed-certs-127599 crio[568]: time="2025-12-16T05:05:28.532294359Z" level=info msg="Removed container 6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper" id=6e7bf151-29ce-4f63-8c1e-5a9ab50f6476 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.55244309Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e84d8a16-c123-468e-802d-bb7dd28b3d6a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.553383005Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0a9b7f12-648c-4a9b-a5fc-8670be7b3b89 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.55469486Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=047c56bd-fd2d-4cdc-b146-980446796373 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.554829923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.559408166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.559631056Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7c7d45c1ee2198f9a96e8b22df384202e411bd0e06c34c9798626b84c0c5020e/merged/etc/passwd: no such file or directory"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.559672736Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7c7d45c1ee2198f9a96e8b22df384202e411bd0e06c34c9798626b84c0c5020e/merged/etc/group: no such file or directory"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.559980696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.583289267Z" level=info msg="Created container 7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099: kube-system/storage-provisioner/storage-provisioner" id=047c56bd-fd2d-4cdc-b146-980446796373 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.583886235Z" level=info msg="Starting container: 7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099" id=ea9852af-8bf8-4c11-ba3b-de85e685759c name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.585611651Z" level=info msg="Started container" PID=1777 containerID=7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099 description=kube-system/storage-provisioner/storage-provisioner id=ea9852af-8bf8-4c11-ba3b-de85e685759c name=/runtime.v1.RuntimeService/StartContainer sandboxID=77df7736a96b7dbb6cc116326895f7a983f79be612af37bb333b362ee4ece56a
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.393527191Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9b9e6ea0-8881-4c5b-adef-1e1d86a2281b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.394550288Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7fa7191f-4fdc-4517-a12d-9d95e6c883e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.395667724Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper" id=c1e66c62-b7d3-4daf-900e-b2733755dece name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.395809395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.404118055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.404834559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.440049768Z" level=info msg="Created container 7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper" id=c1e66c62-b7d3-4daf-900e-b2733755dece name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.440709463Z" level=info msg="Starting container: 7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c" id=74cb4b7a-4cbd-4db8-8a0b-ee3e94254688 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.443139775Z" level=info msg="Started container" PID=1812 containerID=7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper id=74cb4b7a-4cbd-4db8-8a0b-ee3e94254688 name=/runtime.v1.RuntimeService/StartContainer sandboxID=59ecfb9a604051327473a965cbcd4db0e47e89485744c988a1690c0db4d7e0b9
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.585976883Z" level=info msg="Removing container: cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840" id=840c9d9d-ef3d-44f7-b04a-de6bc25ac463 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.596226735Z" level=info msg="Removed container cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper" id=840c9d9d-ef3d-44f7-b04a-de6bc25ac463 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7eb81d2f03473       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   59ecfb9a60405       dashboard-metrics-scraper-6ffb444bf9-jfsgd   kubernetes-dashboard
	7a2b53a9daec2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   77df7736a96b7       storage-provisioner                          kube-system
	322e8d3048a4f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   36d507a0900cc       kubernetes-dashboard-855c9754f9-pdn5g        kubernetes-dashboard
	6ef37ce405289       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   e075655d10849       coredns-66bc5c9577-47h64                     kube-system
	c4c5c7dc30b3d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   4674841db7385       busybox                                      default
	e6cc99576b73e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           49 seconds ago      Running             kube-proxy                  0                   88c5639fa31cd       kube-proxy-j79cw                             kube-system
	b8bf09036fb2b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   77df7736a96b7       storage-provisioner                          kube-system
	53a75ca180b5e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   1fc5bc1048b1e       kindnet-nqzjf                                kube-system
	5aba47586bca5       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           52 seconds ago      Running             kube-scheduler              0                   cee5085ea907c       kube-scheduler-embed-certs-127599            kube-system
	9470750a66d86       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           52 seconds ago      Running             kube-controller-manager     0                   54fe657c8db44       kube-controller-manager-embed-certs-127599   kube-system
	aaeef0aba55ec       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           52 seconds ago      Running             kube-apiserver              0                   45f7e62b79a29       kube-apiserver-embed-certs-127599            kube-system
	40939f377b2ea       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   dd1b817769c6b       etcd-embed-certs-127599                      kube-system
	
	
	==> coredns [6ef37ce405289906c82809f9b2f49a5539a9c80ac13fe66e0ada6ec4a0635e26] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52728 - 25892 "HINFO IN 4715941530840362073.3659782952511961378. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022316955s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-127599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-127599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=embed-certs-127599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:04:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-127599
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:05:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:05:48 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:05:48 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:05:48 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:05:48 +0000   Tue, 16 Dec 2025 05:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-127599
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                5d01c184-5d74-40be-b384-da1ebe94d817
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-47h64                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-127599                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-nqzjf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-127599             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-127599    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-j79cw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-127599             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jfsgd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pdn5g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node embed-certs-127599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node embed-certs-127599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x8 over 114s)  kubelet          Node embed-certs-127599 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node embed-certs-127599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node embed-certs-127599 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node embed-certs-127599 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           104s                 node-controller  Node embed-certs-127599 event: Registered Node embed-certs-127599 in Controller
	  Normal  NodeReady                91s                  kubelet          Node embed-certs-127599 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node embed-certs-127599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node embed-certs-127599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node embed-certs-127599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                  node-controller  Node embed-certs-127599 event: Registered Node embed-certs-127599 in Controller
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [40939f377b2ea50bcacad648ad4f8c9351d77ec9c17f14735f75e1fce438fe46] <==
	{"level":"warn","ts":"2025-12-16T05:05:07.127237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.136689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.147679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.161426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.172363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.183075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.195225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.205394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.218595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.224524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.234574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.244257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.252958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.272010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.281602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.292197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.301717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.311870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.321296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.330421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.342358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.356267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.368607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.383459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.441187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44828","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:05:58 up 48 min,  0 user,  load average: 4.81, 3.13, 2.09
	Linux embed-certs-127599 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [53a75ca180b5e84da8dc5f5902b75fb40c7241e939b25f25a63d651afca47133] <==
	I1216 05:05:08.985001       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:05:08.985631       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1216 05:05:08.985826       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:05:08.985847       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:05:08.985873       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:05:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:05:09.285366       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:05:09.285422       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:05:09.285436       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:05:09.285635       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:05:09.585582       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:05:09.585624       1 metrics.go:72] Registering metrics
	I1216 05:05:09.585709       1 controller.go:711] "Syncing nftables rules"
	I1216 05:05:19.188583       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:19.188669       1 main.go:301] handling current node
	I1216 05:05:29.188656       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:29.188702       1 main.go:301] handling current node
	I1216 05:05:39.188200       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:39.188250       1 main.go:301] handling current node
	I1216 05:05:49.188192       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:49.188240       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aaeef0aba55ec8b43be948ea78142e157863fc7464359991825255d4956d31c7] <==
	I1216 05:05:08.020096       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:05:08.020104       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:05:08.016314       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:05:08.021078       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 05:05:08.021092       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 05:05:08.021171       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:05:08.026832       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1216 05:05:08.028278       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:05:08.035146       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:05:08.039635       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:08.046139       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 05:05:08.046617       1 policy_source.go:240] refreshing policies
	I1216 05:05:08.059732       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:05:08.371695       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:05:08.414980       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:05:08.468402       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:05:08.494059       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:05:08.502150       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:05:08.542203       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.198.243"}
	I1216 05:05:08.553334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.147.137"}
	I1216 05:05:08.922862       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:05:11.373238       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:05:11.721346       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:05:11.822180       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9470750a66d86139939d4b618522b12a7fec97282112773930ad51d8f35b0afe] <==
	I1216 05:05:11.346813       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1216 05:05:11.368291       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:05:11.369319       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 05:05:11.369352       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 05:05:11.369399       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 05:05:11.369411       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 05:05:11.369515       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 05:05:11.370648       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:05:11.370688       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 05:05:11.370827       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 05:05:11.371225       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 05:05:11.373629       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 05:05:11.373733       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:05:11.374108       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 05:05:11.375487       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 05:05:11.375542       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 05:05:11.375584       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 05:05:11.375606       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 05:05:11.375611       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 05:05:11.375638       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 05:05:11.375775       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 05:05:11.378422       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 05:05:11.382632       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 05:05:11.386938       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 05:05:11.392169       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e6cc99576b73e6e7ecafa1c8ac14d847071aef08b420211f4e47b7a6d58c5570] <==
	I1216 05:05:08.811964       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:05:08.891186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:05:08.991327       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:05:08.991454       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1216 05:05:08.991574       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:05:09.015241       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:05:09.015641       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:05:09.023761       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:05:09.024353       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:05:09.024387       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:09.026655       1 config.go:200] "Starting service config controller"
	I1216 05:05:09.026679       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:05:09.026711       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:05:09.026737       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:05:09.026731       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:05:09.026742       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:05:09.029496       1 config.go:309] "Starting node config controller"
	I1216 05:05:09.029522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:05:09.126852       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:05:09.126875       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:05:09.126852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:05:09.129848       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5aba47586bca5359899a12179f858df845d0a3d7ff35738262380c0dc7df2808] <==
	I1216 05:05:07.758750       1 serving.go:386] Generated self-signed cert in-memory
	I1216 05:05:08.208534       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:05:08.208625       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:08.214196       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 05:05:08.214580       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:05:08.214589       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 05:05:08.214634       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:08.214648       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:08.214550       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:05:08.214716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:05:08.214728       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:05:08.314852       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:05:08.314953       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:08.315001       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 16 05:05:11 embed-certs-127599 kubelet[725]: I1216 05:05:11.989414     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwnp\" (UniqueName: \"kubernetes.io/projected/97197d99-258a-4a84-bfe5-e90ddee6931a-kube-api-access-5gwnp\") pod \"dashboard-metrics-scraper-6ffb444bf9-jfsgd\" (UID: \"97197d99-258a-4a84-bfe5-e90ddee6931a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd"
	Dec 16 05:05:14 embed-certs-127599 kubelet[725]: I1216 05:05:14.475885     725 scope.go:117] "RemoveContainer" containerID="b44b4798055b02d1362296367d18e048f1bf71b65931b6e8cdd2a3a90ce9ac12"
	Dec 16 05:05:15 embed-certs-127599 kubelet[725]: I1216 05:05:15.481121     725 scope.go:117] "RemoveContainer" containerID="b44b4798055b02d1362296367d18e048f1bf71b65931b6e8cdd2a3a90ce9ac12"
	Dec 16 05:05:15 embed-certs-127599 kubelet[725]: I1216 05:05:15.481321     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:15 embed-certs-127599 kubelet[725]: E1216 05:05:15.481521     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:16 embed-certs-127599 kubelet[725]: I1216 05:05:16.487142     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:16 embed-certs-127599 kubelet[725]: E1216 05:05:16.487818     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:17 embed-certs-127599 kubelet[725]: I1216 05:05:17.489432     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:17 embed-certs-127599 kubelet[725]: E1216 05:05:17.489593     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:18 embed-certs-127599 kubelet[725]: I1216 05:05:18.503437     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pdn5g" podStartSLOduration=1.779148314 podStartE2EDuration="7.503411602s" podCreationTimestamp="2025-12-16 05:05:11 +0000 UTC" firstStartedPulling="2025-12-16 05:05:12.275343388 +0000 UTC m=+7.023871898" lastFinishedPulling="2025-12-16 05:05:17.999606686 +0000 UTC m=+12.748135186" observedRunningTime="2025-12-16 05:05:18.503013839 +0000 UTC m=+13.251542358" watchObservedRunningTime="2025-12-16 05:05:18.503411602 +0000 UTC m=+13.251940123"
	Dec 16 05:05:28 embed-certs-127599 kubelet[725]: I1216 05:05:28.392870     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:28 embed-certs-127599 kubelet[725]: I1216 05:05:28.519243     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:28 embed-certs-127599 kubelet[725]: I1216 05:05:28.519475     725 scope.go:117] "RemoveContainer" containerID="cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840"
	Dec 16 05:05:28 embed-certs-127599 kubelet[725]: E1216 05:05:28.519680     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:37 embed-certs-127599 kubelet[725]: I1216 05:05:37.151049     725 scope.go:117] "RemoveContainer" containerID="cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840"
	Dec 16 05:05:37 embed-certs-127599 kubelet[725]: E1216 05:05:37.151295     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:39 embed-certs-127599 kubelet[725]: I1216 05:05:39.551959     725 scope.go:117] "RemoveContainer" containerID="b8bf09036fb2b78accb7cae322e042c174300c1fcf135286f2f06672ac50623b"
	Dec 16 05:05:50 embed-certs-127599 kubelet[725]: I1216 05:05:50.392970     725 scope.go:117] "RemoveContainer" containerID="cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840"
	Dec 16 05:05:50 embed-certs-127599 kubelet[725]: I1216 05:05:50.584738     725 scope.go:117] "RemoveContainer" containerID="cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840"
	Dec 16 05:05:50 embed-certs-127599 kubelet[725]: I1216 05:05:50.585051     725 scope.go:117] "RemoveContainer" containerID="7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c"
	Dec 16 05:05:50 embed-certs-127599 kubelet[725]: E1216 05:05:50.585306     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:56 embed-certs-127599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:05:56 embed-certs-127599 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:05:56 embed-certs-127599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:05:56 embed-certs-127599 systemd[1]: kubelet.service: Consumed 1.686s CPU time.
	
	
	==> kubernetes-dashboard [322e8d3048a4f3b13c56188c19486d7ec67867e1670193f355dbd8b3ac3c3f74] <==
	2025/12/16 05:05:18 Using namespace: kubernetes-dashboard
	2025/12/16 05:05:18 Using in-cluster config to connect to apiserver
	2025/12/16 05:05:18 Using secret token for csrf signing
	2025/12/16 05:05:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 05:05:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 05:05:18 Successful initial request to the apiserver, version: v1.34.2
	2025/12/16 05:05:18 Generating JWE encryption key
	2025/12/16 05:05:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 05:05:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 05:05:18 Initializing JWE encryption key from synchronized object
	2025/12/16 05:05:18 Creating in-cluster Sidecar client
	2025/12/16 05:05:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:05:18 Serving insecurely on HTTP port: 9090
	2025/12/16 05:05:18 Starting overwatch
	2025/12/16 05:05:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099] <==
	I1216 05:05:39.597496       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:05:39.604262       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:05:39.604305       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:05:39.606039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:43.061186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:47.322178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:50.920668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:53.974406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:56.997146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:57.001156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:05:57.001279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:05:57.001336       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"063e8c72-eccb-4343-aa0d-2dc37c13d7d3", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-127599_21516f2d-2ebc-4dd0-9c06-7a6b72fda7dd became leader
	I1216 05:05:57.001401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-127599_21516f2d-2ebc-4dd0-9c06-7a6b72fda7dd!
	W1216 05:05:57.003278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:57.006573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:05:57.101632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-127599_21516f2d-2ebc-4dd0-9c06-7a6b72fda7dd!
	
	
	==> storage-provisioner [b8bf09036fb2b78accb7cae322e042c174300c1fcf135286f2f06672ac50623b] <==
	I1216 05:05:08.767900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 05:05:38.775416       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-127599 -n embed-certs-127599
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-127599 -n embed-certs-127599: exit status 2 (370.207998ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-127599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-127599
helpers_test.go:244: (dbg) docker inspect embed-certs-127599:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c",
	        "Created": "2025-12-16T05:03:55.761761294Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:04:59.07009539Z",
	            "FinishedAt": "2025-12-16T05:04:58.095135355Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/hostname",
	        "HostsPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/hosts",
	        "LogPath": "/var/lib/docker/containers/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c/41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c-json.log",
	        "Name": "/embed-certs-127599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-127599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-127599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41c3fa4e1ed941edfe52714e80a2e2a6ce0a9736a785af7b796ec24428565d4c",
	                "LowerDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fad62424ac4a0137ddc30f346ce9307e1028d4d804a3a3d1049bea8f0179f9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-127599",
	                "Source": "/var/lib/docker/volumes/embed-certs-127599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-127599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-127599",
	                "name.minikube.sigs.k8s.io": "embed-certs-127599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c5cd9207de38b3800ed9c1e9aa53e08e0d146771cf6714b5848a89ac204266b0",
	            "SandboxKey": "/var/run/docker/netns/c5cd9207de38",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-127599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fdd74630820b3f4e548e9af2d3fc3c1a7861a593288c72638e11f0a9e6621a7b",
	                    "EndpointID": "5f4c9e0be8d97a2aa590a4da2b7ef7d799a8a91e3764a97da4c521bb281f37ac",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "fa:d8:41:d5:fb:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-127599",
	                        "41c3fa4e1ed9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-127599 -n embed-certs-127599
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-127599 -n embed-certs-127599: exit status 2 (370.42896ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-127599 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-127599 logs -n 25: (1.771899404s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-127599 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ addons  │ enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:04 UTC │
	│ start   │ -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:04 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p newest-cni-418580 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-418580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ embed-certs-127599 image list --format=json                                                                                                                                                                                                          │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p embed-certs-127599 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p auto-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-548714                  │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:05:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:05:59.010658  302822 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:05:59.011043  302822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:59.011059  302822 out.go:374] Setting ErrFile to fd 2...
	I1216 05:05:59.011067  302822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:05:59.011389  302822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:05:59.012059  302822 out.go:368] Setting JSON to false
	I1216 05:05:59.013658  302822 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2910,"bootTime":1765858649,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:05:59.013742  302822 start.go:143] virtualization: kvm guest
	I1216 05:05:59.018118  302822 out.go:179] * [auto-548714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:05:59.019432  302822 notify.go:221] Checking for updates...
	I1216 05:05:59.019440  302822 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:05:59.020718  302822 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:05:59.022007  302822 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:05:59.023184  302822 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:05:59.024299  302822 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:05:59.025532  302822 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:05:59.027841  302822 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:59.027950  302822 config.go:182] Loaded profile config "embed-certs-127599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:05:59.028123  302822 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:05:59.028248  302822 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:05:59.055256  302822 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:05:59.055400  302822 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:59.110999  302822 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:05:59.101505041 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:59.111183  302822 docker.go:319] overlay module found
	I1216 05:05:59.113049  302822 out.go:179] * Using the docker driver based on user configuration
	I1216 05:05:59.114303  302822 start.go:309] selected driver: docker
	I1216 05:05:59.114320  302822 start.go:927] validating driver "docker" against <nil>
	I1216 05:05:59.114334  302822 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:05:59.115168  302822 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:05:59.190393  302822 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-16 05:05:59.179126303 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:05:59.190607  302822 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:05:59.190954  302822 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:05:59.193037  302822 out.go:179] * Using Docker driver with root privileges
	I1216 05:05:59.194265  302822 cni.go:84] Creating CNI manager for ""
	I1216 05:05:59.194336  302822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:05:59.194350  302822 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:05:59.194515  302822 start.go:353] cluster config:
	{Name:auto-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:05:59.196039  302822 out.go:179] * Starting "auto-548714" primary control-plane node in "auto-548714" cluster
	I1216 05:05:59.197260  302822 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:05:59.198480  302822 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:05:59.199495  302822 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:05:59.199524  302822 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:05:59.199531  302822 cache.go:65] Caching tarball of preloaded images
	I1216 05:05:59.199591  302822 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:05:59.199633  302822 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:05:59.199648  302822 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:05:59.199765  302822 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/config.json ...
	I1216 05:05:59.199792  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/config.json: {Name:mk05feba4a7dfc6f9dcfa696ea334b687ae84cd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:05:59.222167  302822 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:05:59.222185  302822 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:05:59.222205  302822 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:05:59.222239  302822 start.go:360] acquireMachinesLock for auto-548714: {Name:mk48775fb718c9adc557bda2e813c33ec0eac11c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:05:59.222346  302822 start.go:364] duration metric: took 86.153µs to acquireMachinesLock for "auto-548714"
	I1216 05:05:59.222373  302822 start.go:93] Provisioning new machine with config: &{Name:auto-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:05:59.222448  302822 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 16 05:05:28 embed-certs-127599 crio[568]: time="2025-12-16T05:05:28.45167293Z" level=info msg="Started container" PID=1763 containerID=cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper id=1a57edbe-5cc0-42ea-807c-cd4c53dbe8bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=59ecfb9a604051327473a965cbcd4db0e47e89485744c988a1690c0db4d7e0b9
	Dec 16 05:05:28 embed-certs-127599 crio[568]: time="2025-12-16T05:05:28.520650313Z" level=info msg="Removing container: 6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974" id=6e7bf151-29ce-4f63-8c1e-5a9ab50f6476 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:28 embed-certs-127599 crio[568]: time="2025-12-16T05:05:28.532294359Z" level=info msg="Removed container 6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper" id=6e7bf151-29ce-4f63-8c1e-5a9ab50f6476 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.55244309Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e84d8a16-c123-468e-802d-bb7dd28b3d6a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.553383005Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0a9b7f12-648c-4a9b-a5fc-8670be7b3b89 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.55469486Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=047c56bd-fd2d-4cdc-b146-980446796373 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.554829923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.559408166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.559631056Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7c7d45c1ee2198f9a96e8b22df384202e411bd0e06c34c9798626b84c0c5020e/merged/etc/passwd: no such file or directory"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.559672736Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7c7d45c1ee2198f9a96e8b22df384202e411bd0e06c34c9798626b84c0c5020e/merged/etc/group: no such file or directory"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.559980696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.583289267Z" level=info msg="Created container 7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099: kube-system/storage-provisioner/storage-provisioner" id=047c56bd-fd2d-4cdc-b146-980446796373 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.583886235Z" level=info msg="Starting container: 7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099" id=ea9852af-8bf8-4c11-ba3b-de85e685759c name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:39 embed-certs-127599 crio[568]: time="2025-12-16T05:05:39.585611651Z" level=info msg="Started container" PID=1777 containerID=7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099 description=kube-system/storage-provisioner/storage-provisioner id=ea9852af-8bf8-4c11-ba3b-de85e685759c name=/runtime.v1.RuntimeService/StartContainer sandboxID=77df7736a96b7dbb6cc116326895f7a983f79be612af37bb333b362ee4ece56a
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.393527191Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9b9e6ea0-8881-4c5b-adef-1e1d86a2281b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.394550288Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7fa7191f-4fdc-4517-a12d-9d95e6c883e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.395667724Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper" id=c1e66c62-b7d3-4daf-900e-b2733755dece name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.395809395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.404118055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.404834559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.440049768Z" level=info msg="Created container 7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper" id=c1e66c62-b7d3-4daf-900e-b2733755dece name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.440709463Z" level=info msg="Starting container: 7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c" id=74cb4b7a-4cbd-4db8-8a0b-ee3e94254688 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.443139775Z" level=info msg="Started container" PID=1812 containerID=7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper id=74cb4b7a-4cbd-4db8-8a0b-ee3e94254688 name=/runtime.v1.RuntimeService/StartContainer sandboxID=59ecfb9a604051327473a965cbcd4db0e47e89485744c988a1690c0db4d7e0b9
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.585976883Z" level=info msg="Removing container: cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840" id=840c9d9d-ef3d-44f7-b04a-de6bc25ac463 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:05:50 embed-certs-127599 crio[568]: time="2025-12-16T05:05:50.596226735Z" level=info msg="Removed container cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd/dashboard-metrics-scraper" id=840c9d9d-ef3d-44f7-b04a-de6bc25ac463 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7eb81d2f03473       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   59ecfb9a60405       dashboard-metrics-scraper-6ffb444bf9-jfsgd   kubernetes-dashboard
	7a2b53a9daec2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   77df7736a96b7       storage-provisioner                          kube-system
	322e8d3048a4f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   36d507a0900cc       kubernetes-dashboard-855c9754f9-pdn5g        kubernetes-dashboard
	6ef37ce405289       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   e075655d10849       coredns-66bc5c9577-47h64                     kube-system
	c4c5c7dc30b3d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   4674841db7385       busybox                                      default
	e6cc99576b73e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   88c5639fa31cd       kube-proxy-j79cw                             kube-system
	b8bf09036fb2b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   77df7736a96b7       storage-provisioner                          kube-system
	53a75ca180b5e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   1fc5bc1048b1e       kindnet-nqzjf                                kube-system
	5aba47586bca5       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   cee5085ea907c       kube-scheduler-embed-certs-127599            kube-system
	9470750a66d86       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   54fe657c8db44       kube-controller-manager-embed-certs-127599   kube-system
	aaeef0aba55ec       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   45f7e62b79a29       kube-apiserver-embed-certs-127599            kube-system
	40939f377b2ea       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   dd1b817769c6b       etcd-embed-certs-127599                      kube-system
	
	
	==> coredns [6ef37ce405289906c82809f9b2f49a5539a9c80ac13fe66e0ada6ec4a0635e26] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52728 - 25892 "HINFO IN 4715941530840362073.3659782952511961378. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022316955s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-127599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-127599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=embed-certs-127599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:04:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-127599
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:05:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:05:48 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:05:48 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:05:48 +0000   Tue, 16 Dec 2025 05:04:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:05:48 +0000   Tue, 16 Dec 2025 05:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-127599
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                5d01c184-5d74-40be-b384-da1ebe94d817
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-47h64                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-127599                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-nqzjf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-127599             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-127599    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-j79cw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-127599             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jfsgd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pdn5g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node embed-certs-127599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node embed-certs-127599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node embed-certs-127599 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node embed-certs-127599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node embed-certs-127599 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node embed-certs-127599 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-127599 event: Registered Node embed-certs-127599 in Controller
	  Normal  NodeReady                93s                  kubelet          Node embed-certs-127599 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node embed-certs-127599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node embed-certs-127599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node embed-certs-127599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node embed-certs-127599 event: Registered Node embed-certs-127599 in Controller
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [40939f377b2ea50bcacad648ad4f8c9351d77ec9c17f14735f75e1fce438fe46] <==
	{"level":"warn","ts":"2025-12-16T05:05:07.127237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.136689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.147679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.161426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.172363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.183075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.195225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.205394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.218595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.224524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.234574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.244257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.252958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.272010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.281602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.292197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.301717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.311870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.321296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.330421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.342358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.356267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.368607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.383459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:07.441187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44828","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:06:00 up 48 min,  0 user,  load average: 4.83, 3.16, 2.11
	Linux embed-certs-127599 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [53a75ca180b5e84da8dc5f5902b75fb40c7241e939b25f25a63d651afca47133] <==
	I1216 05:05:08.985001       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:05:08.985631       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1216 05:05:08.985826       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:05:08.985847       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:05:08.985873       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:05:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:05:09.285366       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:05:09.285422       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:05:09.285436       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:05:09.285635       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:05:09.585582       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:05:09.585624       1 metrics.go:72] Registering metrics
	I1216 05:05:09.585709       1 controller.go:711] "Syncing nftables rules"
	I1216 05:05:19.188583       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:19.188669       1 main.go:301] handling current node
	I1216 05:05:29.188656       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:29.188702       1 main.go:301] handling current node
	I1216 05:05:39.188200       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:39.188250       1 main.go:301] handling current node
	I1216 05:05:49.188192       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:49.188240       1 main.go:301] handling current node
	I1216 05:05:59.195135       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1216 05:05:59.195174       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aaeef0aba55ec8b43be948ea78142e157863fc7464359991825255d4956d31c7] <==
	I1216 05:05:08.020096       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:05:08.020104       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:05:08.016314       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:05:08.021078       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 05:05:08.021092       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 05:05:08.021171       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:05:08.026832       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1216 05:05:08.028278       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:05:08.035146       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:05:08.039635       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:08.046139       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 05:05:08.046617       1 policy_source.go:240] refreshing policies
	I1216 05:05:08.059732       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:05:08.371695       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:05:08.414980       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:05:08.468402       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:05:08.494059       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:05:08.502150       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:05:08.542203       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.198.243"}
	I1216 05:05:08.553334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.147.137"}
	I1216 05:05:08.922862       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:05:11.373238       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:05:11.721346       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:05:11.822180       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9470750a66d86139939d4b618522b12a7fec97282112773930ad51d8f35b0afe] <==
	I1216 05:05:11.346813       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1216 05:05:11.368291       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:05:11.369319       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 05:05:11.369352       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 05:05:11.369399       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 05:05:11.369411       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 05:05:11.369515       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 05:05:11.370648       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:05:11.370688       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 05:05:11.370827       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 05:05:11.371225       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 05:05:11.373629       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 05:05:11.373733       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:05:11.374108       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 05:05:11.375487       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 05:05:11.375542       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 05:05:11.375584       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 05:05:11.375606       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 05:05:11.375611       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 05:05:11.375638       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 05:05:11.375775       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 05:05:11.378422       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 05:05:11.382632       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 05:05:11.386938       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 05:05:11.392169       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e6cc99576b73e6e7ecafa1c8ac14d847071aef08b420211f4e47b7a6d58c5570] <==
	I1216 05:05:08.811964       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:05:08.891186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:05:08.991327       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:05:08.991454       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1216 05:05:08.991574       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:05:09.015241       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:05:09.015641       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:05:09.023761       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:05:09.024353       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:05:09.024387       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:09.026655       1 config.go:200] "Starting service config controller"
	I1216 05:05:09.026679       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:05:09.026711       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:05:09.026737       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:05:09.026731       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:05:09.026742       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:05:09.029496       1 config.go:309] "Starting node config controller"
	I1216 05:05:09.029522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:05:09.126852       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:05:09.126875       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:05:09.126852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:05:09.129848       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5aba47586bca5359899a12179f858df845d0a3d7ff35738262380c0dc7df2808] <==
	I1216 05:05:07.758750       1 serving.go:386] Generated self-signed cert in-memory
	I1216 05:05:08.208534       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:05:08.208625       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:08.214196       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 05:05:08.214580       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:05:08.214589       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 05:05:08.214634       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:08.214648       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:08.214550       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:05:08.214716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:05:08.214728       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:05:08.314852       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:05:08.314953       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:08.315001       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 16 05:05:11 embed-certs-127599 kubelet[725]: I1216 05:05:11.989414     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwnp\" (UniqueName: \"kubernetes.io/projected/97197d99-258a-4a84-bfe5-e90ddee6931a-kube-api-access-5gwnp\") pod \"dashboard-metrics-scraper-6ffb444bf9-jfsgd\" (UID: \"97197d99-258a-4a84-bfe5-e90ddee6931a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd"
	Dec 16 05:05:14 embed-certs-127599 kubelet[725]: I1216 05:05:14.475885     725 scope.go:117] "RemoveContainer" containerID="b44b4798055b02d1362296367d18e048f1bf71b65931b6e8cdd2a3a90ce9ac12"
	Dec 16 05:05:15 embed-certs-127599 kubelet[725]: I1216 05:05:15.481121     725 scope.go:117] "RemoveContainer" containerID="b44b4798055b02d1362296367d18e048f1bf71b65931b6e8cdd2a3a90ce9ac12"
	Dec 16 05:05:15 embed-certs-127599 kubelet[725]: I1216 05:05:15.481321     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:15 embed-certs-127599 kubelet[725]: E1216 05:05:15.481521     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:16 embed-certs-127599 kubelet[725]: I1216 05:05:16.487142     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:16 embed-certs-127599 kubelet[725]: E1216 05:05:16.487818     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:17 embed-certs-127599 kubelet[725]: I1216 05:05:17.489432     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:17 embed-certs-127599 kubelet[725]: E1216 05:05:17.489593     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:18 embed-certs-127599 kubelet[725]: I1216 05:05:18.503437     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pdn5g" podStartSLOduration=1.779148314 podStartE2EDuration="7.503411602s" podCreationTimestamp="2025-12-16 05:05:11 +0000 UTC" firstStartedPulling="2025-12-16 05:05:12.275343388 +0000 UTC m=+7.023871898" lastFinishedPulling="2025-12-16 05:05:17.999606686 +0000 UTC m=+12.748135186" observedRunningTime="2025-12-16 05:05:18.503013839 +0000 UTC m=+13.251542358" watchObservedRunningTime="2025-12-16 05:05:18.503411602 +0000 UTC m=+13.251940123"
	Dec 16 05:05:28 embed-certs-127599 kubelet[725]: I1216 05:05:28.392870     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:28 embed-certs-127599 kubelet[725]: I1216 05:05:28.519243     725 scope.go:117] "RemoveContainer" containerID="6affa6eff3fddab3a184528f085361189a1f96e6b74e991eb40918d79219b974"
	Dec 16 05:05:28 embed-certs-127599 kubelet[725]: I1216 05:05:28.519475     725 scope.go:117] "RemoveContainer" containerID="cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840"
	Dec 16 05:05:28 embed-certs-127599 kubelet[725]: E1216 05:05:28.519680     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:37 embed-certs-127599 kubelet[725]: I1216 05:05:37.151049     725 scope.go:117] "RemoveContainer" containerID="cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840"
	Dec 16 05:05:37 embed-certs-127599 kubelet[725]: E1216 05:05:37.151295     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:39 embed-certs-127599 kubelet[725]: I1216 05:05:39.551959     725 scope.go:117] "RemoveContainer" containerID="b8bf09036fb2b78accb7cae322e042c174300c1fcf135286f2f06672ac50623b"
	Dec 16 05:05:50 embed-certs-127599 kubelet[725]: I1216 05:05:50.392970     725 scope.go:117] "RemoveContainer" containerID="cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840"
	Dec 16 05:05:50 embed-certs-127599 kubelet[725]: I1216 05:05:50.584738     725 scope.go:117] "RemoveContainer" containerID="cf64e59f3f39b4756bd04f739f79862ea8eb905a5e9b44cedf0d1fe6940d6840"
	Dec 16 05:05:50 embed-certs-127599 kubelet[725]: I1216 05:05:50.585051     725 scope.go:117] "RemoveContainer" containerID="7eb81d2f03473dd2d1dd4a72d868edaf7f2ba0ebde8c06bacc7fa434b7bd3c2c"
	Dec 16 05:05:50 embed-certs-127599 kubelet[725]: E1216 05:05:50.585306     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jfsgd_kubernetes-dashboard(97197d99-258a-4a84-bfe5-e90ddee6931a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jfsgd" podUID="97197d99-258a-4a84-bfe5-e90ddee6931a"
	Dec 16 05:05:56 embed-certs-127599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:05:56 embed-certs-127599 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:05:56 embed-certs-127599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:05:56 embed-certs-127599 systemd[1]: kubelet.service: Consumed 1.686s CPU time.
	
	
	==> kubernetes-dashboard [322e8d3048a4f3b13c56188c19486d7ec67867e1670193f355dbd8b3ac3c3f74] <==
	2025/12/16 05:05:18 Using namespace: kubernetes-dashboard
	2025/12/16 05:05:18 Using in-cluster config to connect to apiserver
	2025/12/16 05:05:18 Using secret token for csrf signing
	2025/12/16 05:05:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 05:05:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 05:05:18 Successful initial request to the apiserver, version: v1.34.2
	2025/12/16 05:05:18 Generating JWE encryption key
	2025/12/16 05:05:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 05:05:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 05:05:18 Initializing JWE encryption key from synchronized object
	2025/12/16 05:05:18 Creating in-cluster Sidecar client
	2025/12/16 05:05:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:05:18 Serving insecurely on HTTP port: 9090
	2025/12/16 05:05:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:05:18 Starting overwatch
	
	
	==> storage-provisioner [7a2b53a9daec21267b681ae6e8fd3bb9bf572eb88e20cc1c810f5343770d4099] <==
	I1216 05:05:39.597496       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:05:39.604262       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:05:39.604305       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:05:39.606039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:43.061186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:47.322178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:50.920668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:53.974406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:56.997146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:57.001156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:05:57.001279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:05:57.001336       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"063e8c72-eccb-4343-aa0d-2dc37c13d7d3", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-127599_21516f2d-2ebc-4dd0-9c06-7a6b72fda7dd became leader
	I1216 05:05:57.001401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-127599_21516f2d-2ebc-4dd0-9c06-7a6b72fda7dd!
	W1216 05:05:57.003278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:57.006573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:05:57.101632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-127599_21516f2d-2ebc-4dd0-9c06-7a6b72fda7dd!
	W1216 05:05:59.009772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:05:59.018323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b8bf09036fb2b78accb7cae322e042c174300c1fcf135286f2f06672ac50623b] <==
	I1216 05:05:08.767900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 05:05:38.775416       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-127599 -n embed-certs-127599
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-127599 -n embed-certs-127599: exit status 2 (346.756313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-127599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.34s)
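
One pattern worth reading out of the kubelet section above: the dashboard-metrics-scraper restarts follow the standard CrashLoopBackOff schedule, with the delay doubling from 10s to 20s to 40s across the 05:05:15-05:05:50 messages. Below is a minimal Go sketch of that doubling; the 10s base and 5m ceiling are kubelet's commonly documented defaults and are assumptions here, not values read from this cluster's configuration.

	// CrashLoopBackOff doubling as seen in the kubelet log above.
	// Assumed defaults: 10s base delay, doubling per restart, 5m cap.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay, ceiling := 10*time.Second, 5*time.Minute
		for restart := 1; restart <= 8; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, delay)
			delay *= 2
			if delay > ceiling {
				delay = ceiling
			}
		}
	}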

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-418580 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-418580 --alsologtostderr -v=1: exit status 80 (2.623297567s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-418580 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:06:07.606836  307578 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:06:07.606982  307578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:07.606997  307578 out.go:374] Setting ErrFile to fd 2...
	I1216 05:06:07.607004  307578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:07.607318  307578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:06:07.607640  307578 out.go:368] Setting JSON to false
	I1216 05:06:07.607666  307578 mustload.go:66] Loading cluster: newest-cni-418580
	I1216 05:06:07.608191  307578 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:06:07.608737  307578 cli_runner.go:164] Run: docker container inspect newest-cni-418580 --format={{.State.Status}}
	I1216 05:06:07.631937  307578 host.go:66] Checking if "newest-cni-418580" exists ...
	I1216 05:06:07.632359  307578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:07.707985  307578 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:92 SystemTime:2025-12-16 05:06:07.695722811 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:07.708962  307578 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-418580 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 05:06:07.712082  307578 out.go:179] * Pausing node newest-cni-418580 ... 
	I1216 05:06:07.713246  307578 host.go:66] Checking if "newest-cni-418580" exists ...
	I1216 05:06:07.713545  307578 ssh_runner.go:195] Run: systemctl --version
	I1216 05:06:07.713596  307578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-418580
	I1216 05:06:07.732683  307578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/newest-cni-418580/id_rsa Username:docker}
	I1216 05:06:07.827715  307578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:07.843462  307578 pause.go:52] kubelet running: true
	I1216 05:06:07.843540  307578 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:06:08.013416  307578 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:06:08.013513  307578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:06:08.096531  307578 cri.go:89] found id: "c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696"
	I1216 05:06:08.096557  307578 cri.go:89] found id: "861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c"
	I1216 05:06:08.096564  307578 cri.go:89] found id: "792381e45950963b0ae8db2f5ad977f211daaf0810b7a93dd2d909444e1821c4"
	I1216 05:06:08.096569  307578 cri.go:89] found id: "5aa99fc300b28206b76480bb951b5c8dfa804ac51633f6fe3b8e67aa1be01b31"
	I1216 05:06:08.096575  307578 cri.go:89] found id: "6c35f39d09b5d69eece5daa0f5057bb2198649a6a6ddb2a863f25c31e0a13ce2"
	I1216 05:06:08.096581  307578 cri.go:89] found id: "31d4485dc7982aced65031b666c86beb9beea7f0add39a846111c271e1b90ccf"
	I1216 05:06:08.096586  307578 cri.go:89] found id: ""
	I1216 05:06:08.096635  307578 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:06:08.110101  307578 retry.go:31] will retry after 151.919694ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:08Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:06:08.262525  307578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:08.276362  307578 pause.go:52] kubelet running: false
	I1216 05:06:08.276417  307578 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:06:08.422931  307578 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:06:08.423053  307578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:06:08.498510  307578 cri.go:89] found id: "c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696"
	I1216 05:06:08.498532  307578 cri.go:89] found id: "861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c"
	I1216 05:06:08.498538  307578 cri.go:89] found id: "792381e45950963b0ae8db2f5ad977f211daaf0810b7a93dd2d909444e1821c4"
	I1216 05:06:08.498543  307578 cri.go:89] found id: "5aa99fc300b28206b76480bb951b5c8dfa804ac51633f6fe3b8e67aa1be01b31"
	I1216 05:06:08.498547  307578 cri.go:89] found id: "6c35f39d09b5d69eece5daa0f5057bb2198649a6a6ddb2a863f25c31e0a13ce2"
	I1216 05:06:08.498552  307578 cri.go:89] found id: "31d4485dc7982aced65031b666c86beb9beea7f0add39a846111c271e1b90ccf"
	I1216 05:06:08.498556  307578 cri.go:89] found id: ""
	I1216 05:06:08.498602  307578 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:06:08.510431  307578 retry.go:31] will retry after 226.243213ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:08Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:06:08.737739  307578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:08.752134  307578 pause.go:52] kubelet running: false
	I1216 05:06:08.752206  307578 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:06:08.890176  307578 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:06:08.890252  307578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:06:08.959339  307578 cri.go:89] found id: "c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696"
	I1216 05:06:08.959364  307578 cri.go:89] found id: "861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c"
	I1216 05:06:08.959371  307578 cri.go:89] found id: "792381e45950963b0ae8db2f5ad977f211daaf0810b7a93dd2d909444e1821c4"
	I1216 05:06:08.959376  307578 cri.go:89] found id: "5aa99fc300b28206b76480bb951b5c8dfa804ac51633f6fe3b8e67aa1be01b31"
	I1216 05:06:08.959380  307578 cri.go:89] found id: "6c35f39d09b5d69eece5daa0f5057bb2198649a6a6ddb2a863f25c31e0a13ce2"
	I1216 05:06:08.959389  307578 cri.go:89] found id: "31d4485dc7982aced65031b666c86beb9beea7f0add39a846111c271e1b90ccf"
	I1216 05:06:08.959407  307578 cri.go:89] found id: ""
	I1216 05:06:08.959456  307578 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:06:08.972092  307578 retry.go:31] will retry after 636.148764ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:08Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:06:09.608962  307578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:09.624633  307578 pause.go:52] kubelet running: false
	I1216 05:06:09.624701  307578 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:06:09.744325  307578 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:06:09.744406  307578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:06:09.812057  307578 cri.go:89] found id: "c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696"
	I1216 05:06:09.812085  307578 cri.go:89] found id: "861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c"
	I1216 05:06:09.812090  307578 cri.go:89] found id: "792381e45950963b0ae8db2f5ad977f211daaf0810b7a93dd2d909444e1821c4"
	I1216 05:06:09.812094  307578 cri.go:89] found id: "5aa99fc300b28206b76480bb951b5c8dfa804ac51633f6fe3b8e67aa1be01b31"
	I1216 05:06:09.812097  307578 cri.go:89] found id: "6c35f39d09b5d69eece5daa0f5057bb2198649a6a6ddb2a863f25c31e0a13ce2"
	I1216 05:06:09.812101  307578 cri.go:89] found id: "31d4485dc7982aced65031b666c86beb9beea7f0add39a846111c271e1b90ccf"
	I1216 05:06:09.812103  307578 cri.go:89] found id: ""
	I1216 05:06:09.812152  307578 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:06:09.865644  307578 out.go:203] 
	W1216 05:06:10.005727  307578 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 05:06:10.005771  307578 out.go:285] * 
	* 
	W1216 05:06:10.011061  307578 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:06:10.028770  307578 out.go:203] 

                                                
                                                
** /stderr **
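
The stderr above pinpoints the pause failure on this crio node: every attempt shells in and runs `sudo runc list -f json`, which keeps exiting 1 because the runc state directory /run/runc does not exist, and the retry.go:31 lines show minikube waiting roughly 152ms, 226ms, and 636ms between attempts before surfacing GUEST_PAUSE. The Go sketch below reproduces that probe-and-backoff shape; the attempt count, base delay, and jitter are illustrative assumptions, not minikube's exact schedule, and it assumes runc is on PATH with non-interactive sudo.

	// Probe-and-retry sketch modeled on the retry.go lines above.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func main() {
		base := 150 * time.Millisecond // illustrative base delay, not minikube's exact value
		for attempt := 1; attempt <= 4; attempt++ {
			// The probe that keeps failing in the log: on this node runc
			// exits with "open /run/runc: no such file or directory".
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("attempt %d: runc list succeeded: %s\n", attempt, out)
				return
			}
			// Grow the wait with some jitter, as the increasing
			// "will retry after ..." durations above suggest.
			wait := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, err, wait)
			time.Sleep(wait)
			base *= 2
		}
		fmt.Println("exhausted retries; minikube reports this as GUEST_PAUSE")
	}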
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-418580 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-418580
helpers_test.go:244: (dbg) docker inspect newest-cni-418580:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1",
	        "Created": "2025-12-16T05:05:28.422423027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:05:55.147534054Z",
	            "FinishedAt": "2025-12-16T05:05:54.231520148Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/hosts",
	        "LogPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1-json.log",
	        "Name": "/newest-cni-418580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-418580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-418580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1",
	                "LowerDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-418580",
	                "Source": "/var/lib/docker/volumes/newest-cni-418580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-418580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-418580",
	                "name.minikube.sigs.k8s.io": "newest-cni-418580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "721b3ecee6376869daa6214cdeb5eb180664e052bba1252dee9831f03f0f8d1a",
	            "SandboxKey": "/var/run/docker/netns/721b3ecee637",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-418580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "401a0a2302958aa43185772bb4bb58f2878244e830bb4ed218c958e2da9161d5",
	                    "EndpointID": "330f1cdf085b1d66245d40aeaf59d56e66f1c595ef2b7727f56d55e4e92dbf92",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "3a:9c:46:e2:07:44",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-418580",
	                        "7ec1bd166949"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
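
Note: the port bindings in the inspect output above can be read back with the same Go-template filter that appears verbatim later in this log for the auto-548714 profile. A minimal sketch against this profile, which should print 33109 given the JSON above:

	# Sketch: extract the host port mapped to the container's 22/tcp (SSH) port.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  newest-cni-418580
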
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418580 -n newest-cni-418580
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418580 -n newest-cni-418580: exit status 2 (341.487952ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
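
Note: `--format={{.Host}}` renders only the Host field of the status struct through Go's text/template, so the command can print "Running" while still exiting non-zero whenever some other component is off its expected state; that is why the harness tags exit status 2 as "(may be ok)". A sketch of the same probe:

	# Sketch: reproduce the harness's status check; a non-zero exit means at
	# least one component is not in its expected state.
	out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-418580 -n newest-cni-418580
	echo "exit: $?"
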
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-418580 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-418580 logs -n 25: (1.121516792s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p newest-cni-418580 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-418580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:06 UTC │
	│ image   │ embed-certs-127599 image list --format=json                                                                                                                                                                                                          │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p embed-certs-127599 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p auto-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-548714                  │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p embed-certs-127599                                                                                                                                                                                                                                │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ delete  │ -p embed-certs-127599                                                                                                                                                                                                                                │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ start   │ -p kindnet-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-548714               │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	│ image   │ newest-cni-418580 image list --format=json                                                                                                                                                                                                           │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ pause   │ -p newest-cni-418580 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:06:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:06:05.454374  306433 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:06:05.454495  306433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:05.454508  306433 out.go:374] Setting ErrFile to fd 2...
	I1216 05:06:05.454517  306433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:05.454795  306433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:06:05.455557  306433 out.go:368] Setting JSON to false
	I1216 05:06:05.460960  306433 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2916,"bootTime":1765858649,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:06:05.461086  306433 start.go:143] virtualization: kvm guest
	I1216 05:06:05.464428  306433 out.go:179] * [kindnet-548714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:06:05.465939  306433 notify.go:221] Checking for updates...
	I1216 05:06:05.466006  306433 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:06:05.469753  306433 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:06:05.471043  306433 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:06:05.472488  306433 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:06:05.473679  306433 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:06:05.475041  306433 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:06:05.476715  306433 config.go:182] Loaded profile config "auto-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:05.476862  306433 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:05.477010  306433 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:06:05.477132  306433 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:06:05.508594  306433 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:06:05.508781  306433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:05.597858  306433 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:06:05.577806462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:05.599131  306433 docker.go:319] overlay module found
	I1216 05:06:05.600990  306433 out.go:179] * Using the docker driver based on user configuration
	I1216 05:06:05.602241  306433 start.go:309] selected driver: docker
	I1216 05:06:05.602257  306433 start.go:927] validating driver "docker" against <nil>
	I1216 05:06:05.602272  306433 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:06:05.603084  306433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:05.693222  306433 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:06:05.679115537 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:05.693447  306433 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:06:05.693746  306433 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:06:05.694993  306433 out.go:179] * Using Docker driver with root privileges
	I1216 05:06:05.696098  306433 cni.go:84] Creating CNI manager for "kindnet"
	I1216 05:06:05.696122  306433 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:06:05.696215  306433 start.go:353] cluster config:
	{Name:kindnet-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:05.697483  306433 out.go:179] * Starting "kindnet-548714" primary control-plane node in "kindnet-548714" cluster
	I1216 05:06:05.698792  306433 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:06:05.700535  306433 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:06:05.702191  306433 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:05.702392  306433 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:06:05.702405  306433 cache.go:65] Caching tarball of preloaded images
	I1216 05:06:05.702503  306433 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:06:05.702513  306433 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:06:05.702628  306433 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/config.json ...
	I1216 05:06:05.702648  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/config.json: {Name:mk4bfd6285be932047085ce22fb0ce76bf268c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:05.702275  306433 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:06:05.737667  306433 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:06:05.737691  306433 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:06:05.737711  306433 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:06:05.737750  306433 start.go:360] acquireMachinesLock for kindnet-548714: {Name:mke5003da2d1236e174e843cc78b8a9282861f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:06:05.737861  306433 start.go:364] duration metric: took 90.645µs to acquireMachinesLock for "kindnet-548714"
	I1216 05:06:05.737890  306433 start.go:93] Provisioning new machine with config: &{Name:kindnet-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:06:05.737972  306433 start.go:125] createHost starting for "" (driver="docker")
	I1216 05:06:06.273753  300655 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.786516131s)
	I1216 05:06:06.273700  300655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.789251044s)
	I1216 05:06:06.273790  300655 api_server.go:72] duration metric: took 1.973437927s to wait for apiserver process to appear ...
	I1216 05:06:06.273799  300655 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:06:06.273823  300655 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:06:06.273858  300655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.761721958s)
	I1216 05:06:06.274053  300655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.616316488s)
	I1216 05:06:06.275959  300655 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-418580 addons enable metrics-server
	
	I1216 05:06:06.284934  300655 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:06:06.284961  300655 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
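
	Note: the two hooks reported as failed above (rbac/bootstrap-roles and the system priority-class bootstrap) are post-start hooks that normally finish within seconds of an apiserver restart, which matches the 200 response that follows below. The same verbose listing can be fetched directly; a sketch using the endpoint from this log (-k because the probe goes straight to the node IP rather than through the cluster CA bundle):

	# Sketch: poll the apiserver's verbose healthz endpoint, as minikube does above.
	curl -ks https://192.168.85.2:8443/healthz?verbose
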
	I1216 05:06:06.292907  300655 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1216 05:06:04.315664  302822 provision.go:177] copyRemoteCerts
	I1216 05:06:04.315804  302822 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:06:04.315880  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:04.346498  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:04.460495  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:06:04.484377  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 05:06:04.508621  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:06:04.539581  302822 provision.go:87] duration metric: took 570.151931ms to configureAuth
	I1216 05:06:04.539615  302822 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:06:04.539817  302822 config.go:182] Loaded profile config "auto-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:04.539961  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:04.571893  302822 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:04.572228  302822 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1216 05:06:04.572259  302822 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:06:04.909510  302822 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:06:04.909538  302822 machine.go:97] duration metric: took 1.436593047s to provisionDockerMachine
	I1216 05:06:04.909549  302822 client.go:176] duration metric: took 5.680791973s to LocalClient.Create
	I1216 05:06:04.909570  302822 start.go:167] duration metric: took 5.680856057s to libmachine.API.Create "auto-548714"
	I1216 05:06:04.909579  302822 start.go:293] postStartSetup for "auto-548714" (driver="docker")
	I1216 05:06:04.909591  302822 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:06:04.909658  302822 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:06:04.909710  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:04.932393  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:05.042745  302822 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:06:05.047921  302822 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:06:05.047957  302822 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:06:05.047970  302822 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:06:05.048043  302822 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:06:05.048354  302822 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:06:05.048506  302822 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:06:05.061351  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:05.086237  302822 start.go:296] duration metric: took 176.644074ms for postStartSetup
	I1216 05:06:05.086654  302822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-548714
	I1216 05:06:05.106602  302822 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/config.json ...
	I1216 05:06:05.106810  302822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:06:05.106848  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:05.126683  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:05.225429  302822 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:06:05.236480  302822 start.go:128] duration metric: took 6.014018982s to createHost
	I1216 05:06:05.236511  302822 start.go:83] releasing machines lock for "auto-548714", held for 6.014150762s
	I1216 05:06:05.236591  302822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-548714
	I1216 05:06:05.262791  302822 ssh_runner.go:195] Run: cat /version.json
	I1216 05:06:05.262849  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:05.262875  302822 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:06:05.262963  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:05.284120  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:05.284736  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:05.378270  302822 ssh_runner.go:195] Run: systemctl --version
	I1216 05:06:05.449839  302822 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:06:05.506686  302822 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:06:05.512233  302822 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:06:05.512307  302822 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:06:05.552181  302822 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
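	Note: the find/mv above parks any bridge or podman CNI configs under a .mk_disabled suffix so that the CNI minikube installs is the only one CRI-O loads. Undoing it is just the reverse rename; a sketch:

	# Sketch: restore the CNI configs that were parked above.
	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
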
	I1216 05:06:05.552206  302822 start.go:496] detecting cgroup driver to use...
	I1216 05:06:05.552247  302822 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:06:05.552296  302822 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:06:05.588272  302822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:06:05.611172  302822 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:06:05.611245  302822 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:06:05.638614  302822 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:06:05.671577  302822 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:06:05.795360  302822 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:06:05.912744  302822 docker.go:234] disabling docker service ...
	I1216 05:06:05.912812  302822 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:06:05.938530  302822 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:06:05.953224  302822 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:06:06.074679  302822 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:06:06.197549  302822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:06:06.211819  302822 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:06:06.227706  302822 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:06:06.227778  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.250010  302822 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:06:06.250104  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.260247  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.270795  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.282829  302822 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:06:06.293372  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.303138  302822 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.318658  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.328347  302822 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:06:06.344493  302822 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:06:06.353030  302822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:06.446142  302822 ssh_runner.go:195] Run: sudo systemctl restart crio
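	Note: taken together, the sed edits above leave roughly the following fragment in /etc/crio/crio.conf.d/02-crio.conf (a sketch: the TOML section headers are assumptions and the real file carries more keys, but the key/value pairs are exactly those from the commands in this log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
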
	I1216 05:06:06.605145  302822 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:06:06.605222  302822 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:06:06.609928  302822 start.go:564] Will wait 60s for crictl version
	I1216 05:06:06.609988  302822 ssh_runner.go:195] Run: which crictl
	I1216 05:06:06.614476  302822 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:06:06.643951  302822 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:06:06.644059  302822 ssh_runner.go:195] Run: crio --version
	I1216 05:06:06.682104  302822 ssh_runner.go:195] Run: crio --version
	I1216 05:06:06.721595  302822 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:06:06.294044  300655 addons.go:530] duration metric: took 1.993462564s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1216 05:06:06.774754  300655 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:06:06.780148  300655 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:06:06.781291  300655 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:06:06.781325  300655 api_server.go:131] duration metric: took 507.51835ms to wait for apiserver health ...
	I1216 05:06:06.781338  300655 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:06:06.786681  300655 system_pods.go:59] 8 kube-system pods found
	I1216 05:06:06.786717  300655 system_pods.go:61] "coredns-7d764666f9-67tdz" [2e7ab1b2-db44-41e7-9601-fe1fb0f149b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:06:06.786740  300655 system_pods.go:61] "etcd-newest-cni-418580" [5345f203-8998-46d2-af9b-9f60ee31c9b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:06:06.786752  300655 system_pods.go:61] "kindnet-btffs" [54b00649-4160-4dab-bce1-83080b927bde] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:06:06.786762  300655 system_pods.go:61] "kube-apiserver-newest-cni-418580" [922b709d-f979-411d-87d3-1d6e2085c405] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:06:06.786770  300655 system_pods.go:61] "kube-controller-manager-newest-cni-418580" [41246201-9b24-43b9-a291-b5d087085e7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:06:06.786783  300655 system_pods.go:61] "kube-proxy-ls9s9" [dbf8d374-0c38-4c64-ac2c-9a71db8b4223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:06:06.786792  300655 system_pods.go:61] "kube-scheduler-newest-cni-418580" [7a507411-b5e7-4bc1-8740-e07e3b476f8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:06:06.786809  300655 system_pods.go:61] "storage-provisioner" [fa6b12e1-67b9-40bb-93c3-42a32f0d9ed4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:06:06.786822  300655 system_pods.go:74] duration metric: took 5.475398ms to wait for pod list to return data ...
	I1216 05:06:06.786833  300655 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:06:06.789795  300655 default_sa.go:45] found service account: "default"
	I1216 05:06:06.789823  300655 default_sa.go:55] duration metric: took 2.983136ms for default service account to be created ...
	I1216 05:06:06.789838  300655 kubeadm.go:587] duration metric: took 2.489483599s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 05:06:06.789857  300655 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:06:06.792779  300655 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:06:06.792805  300655 node_conditions.go:123] node cpu capacity is 8
	I1216 05:06:06.792826  300655 node_conditions.go:105] duration metric: took 2.954454ms to run NodePressure ...
	I1216 05:06:06.792838  300655 start.go:242] waiting for startup goroutines ...
	I1216 05:06:06.792855  300655 start.go:247] waiting for cluster config update ...
	I1216 05:06:06.792869  300655 start.go:256] writing updated cluster config ...
	I1216 05:06:06.793154  300655 ssh_runner.go:195] Run: rm -f paused
	I1216 05:06:06.863124  300655 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:06:06.868474  300655 out.go:179] * Done! kubectl is now configured to use "newest-cni-418580" cluster and "default" namespace by default
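	Note: the "minor skew: 1" above refers to kubectl's supported version skew; a 1.34 client against a 1.35 control plane is within the documented one-minor-version tolerance, so the message is informational. A sketch to confirm the pairing:

	# Sketch: print client and server versions for the newly configured context.
	kubectl version
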
	I1216 05:06:06.722670  302822 cli_runner.go:164] Run: docker network inspect auto-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:06.744979  302822 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 05:06:06.749933  302822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:06:06.761754  302822 kubeadm.go:884] updating cluster {Name:auto-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:06:06.761861  302822 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:06.761911  302822 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:06.806300  302822 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:06.806324  302822 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:06:06.806375  302822 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:06.843009  302822 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:06.843071  302822 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:06:06.843083  302822 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1216 05:06:06.843204  302822 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-548714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
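	Note: the unit text above follows the standard systemd override pattern: the empty ExecStart= in the drop-in clears the packaged command before the next line replaces it. The merged result can be inspected on the node; a sketch:

	# Sketch: show the packaged kubelet unit together with the 10-kubeadm.conf
	# drop-in that is scp'd into place below.
	systemctl cat kubelet
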
	I1216 05:06:06.843290  302822 ssh_runner.go:195] Run: crio config
	I1216 05:06:06.904402  302822 cni.go:84] Creating CNI manager for ""
	I1216 05:06:06.904434  302822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:06:06.904454  302822 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:06:06.904490  302822 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-548714 NodeName:auto-548714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:06:06.904666  302822 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-548714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
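	The generated kubeadm.yaml above (staged below as a 2210-byte file) is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---. A small sketch, assuming gopkg.in/yaml.v3 is available and using the staging path from the log, that walks such a stream and prints each document's apiVersion and kind:

    // Sketch: enumerate the documents in a multi-document kubeadm config.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // yields one YAML document per Decode call
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }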
	
	I1216 05:06:06.904744  302822 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:06:06.915026  302822 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:06:06.915100  302822 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:06:06.923718  302822 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 05:06:06.939928  302822 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:06:06.958767  302822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1216 05:06:06.973822  302822 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:06:06.978790  302822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
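	The one-liner above makes the /etc/hosts update idempotent: it filters out any existing control-plane.minikube.internal entry before appending the current IP, staging the result in a temp file and copying it into place. An equivalent sketch in Go, writing the file directly instead of via sudo cp:

    // Sketch of the idempotent /etc/hosts update: drop any stale entry for
    // the name, append the current mapping, rewrite the file. Requires root.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const ip = "192.168.103.2"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing entry for the name, whatever IP it points at.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }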
	I1216 05:06:06.992267  302822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:07.091720  302822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:07.115163  302822 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714 for IP: 192.168.103.2
	I1216 05:06:07.115191  302822 certs.go:195] generating shared ca certs ...
	I1216 05:06:07.115211  302822 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.115388  302822 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:06:07.115458  302822 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:06:07.115473  302822 certs.go:257] generating profile certs ...
	I1216 05:06:07.115550  302822 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.key
	I1216 05:06:07.115566  302822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.crt with IP's: []
	I1216 05:06:07.211064  302822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.crt ...
	I1216 05:06:07.211098  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.crt: {Name:mkdf5bb7daa6198e1f584edc92121727b7fa93d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.211311  302822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.key ...
	I1216 05:06:07.211327  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.key: {Name:mkf9146d797d6e8eb77041efe4608512359f6a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.211457  302822 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key.60184c8a
	I1216 05:06:07.211474  302822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt.60184c8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 05:06:07.278754  302822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt.60184c8a ...
	I1216 05:06:07.278780  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt.60184c8a: {Name:mk172aefc28213870531919a7e7db7c41a470622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.278926  302822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key.60184c8a ...
	I1216 05:06:07.278935  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key.60184c8a: {Name:mka88400e8e81aba77dc2366d8c8a72c30f0f12a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.279031  302822 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt.60184c8a -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt
	I1216 05:06:07.279116  302822 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key.60184c8a -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key
	I1216 05:06:07.279169  302822 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.key
	I1216 05:06:07.279181  302822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.crt with IP's: []
	I1216 05:06:07.408248  302822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.crt ...
	I1216 05:06:07.408271  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.crt: {Name:mke46e8b927f72f1edcff928726bfd8d96db8bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.408430  302822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.key ...
	I1216 05:06:07.408446  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.key: {Name:mk46da144b13b1cc1808c00e2cc5c86a6f277b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
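	Each profile cert above ("minikube-user", "minikube", "aggregator") is an x509 certificate signed by the shared minikubeCA; the apiserver certificate additionally carries the IP SANs listed in the log (the kubernetes Service IP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.103.2). A standard-library sketch of minting such a CA-signed cert; it creates a throwaway CA for self-containment, whereas minikube reuses the existing CA key pair:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Throwaway CA for the sketch; minikube reuses minikubeCA instead.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Server cert carrying the apiserver IP SANs from the log above.
        key := must(rsa.GenerateKey(rand.Reader, 2048))
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }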
	I1216 05:06:07.408664  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:06:07.408722  302822 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:06:07.408736  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:06:07.408769  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:06:07.408807  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:06:07.408844  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:06:07.408900  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:07.409525  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:06:07.433096  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:06:07.451497  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:06:07.469124  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:06:07.487238  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1216 05:06:07.507672  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:06:07.528689  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:06:07.550152  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:06:07.572679  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:06:07.598147  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:06:07.622104  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:06:07.647876  302822 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:06:07.665960  302822 ssh_runner.go:195] Run: openssl version
	I1216 05:06:07.674631  302822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:07.689204  302822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:06:07.699498  302822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:07.704553  302822 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:07.704617  302822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:07.752605  302822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:06:07.763352  302822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:06:07.770849  302822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:06:07.778836  302822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:06:07.787798  302822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:06:07.791890  302822 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:06:07.791950  302822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:06:07.830402  302822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:06:07.841363  302822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:06:07.850249  302822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:06:07.859225  302822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:06:07.867268  302822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:06:07.871515  302822 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:06:07.871581  302822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:06:07.913944  302822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:07.922299  302822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
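	The b5213941.0, 51391683.0, and 3ec20f2e.0 symlinks above follow OpenSSL's subject-hash lookup convention: `openssl x509 -hash -noout` prints the hash of the certificate's subject, and OpenSSL resolves trust anchors in /etc/ssl/certs by <hash>.0. A sketch of the same step, shelling out to openssl just as the logged commands do (paths taken from the log):

    // Sketch: compute the OpenSSL subject hash of a CA certificate and
    // symlink it into /etc/ssl/certs as <hash>.0. Requires root and openssl.
    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // mirror `ln -fs`: replace any stale link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }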
	I1216 05:06:07.930119  302822 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:06:07.934805  302822 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:06:07.934875  302822 kubeadm.go:401] StartCluster: {Name:auto-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:07.934966  302822 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:06:07.935037  302822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:06:07.966344  302822 cri.go:89] found id: ""
	I1216 05:06:07.966423  302822 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:06:07.975150  302822 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:07.983585  302822 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:07.983648  302822 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:07.993121  302822 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:07.993152  302822 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:07.993206  302822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:06:08.002508  302822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:08.002580  302822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:08.011710  302822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:06:08.021353  302822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:08.021481  302822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:08.030473  302822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:06:08.041454  302822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:08.041520  302822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:08.052372  302822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:06:08.060865  302822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:08.060923  302822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
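	The sweep above checks each of the four kubeconfigs for the expected control-plane endpoint and removes any that do not match; on a first start the greps fail only because the files do not exist yet, so the rm -f calls are no-ops. In Go, the same logic is roughly:

    // Sketch of the stale-config sweep: keep each kubeconfig only if it
    // already points at the expected control-plane endpoint.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing elsewhere: remove so kubeadm regenerates it.
                os.Remove(f)
            }
        }
    }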
	I1216 05:06:08.069590  302822 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:08.115674  302822 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 05:06:08.116347  302822 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:06:08.140643  302822 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:06:08.140726  302822 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:06:08.140776  302822 kubeadm.go:319] OS: Linux
	I1216 05:06:08.140823  302822 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:06:08.140893  302822 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:06:08.140956  302822 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:06:08.141087  302822 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:06:08.141178  302822 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:06:08.141244  302822 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:06:08.141306  302822 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:06:08.141372  302822 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:06:08.208631  302822 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:06:08.208782  302822 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:06:08.208914  302822 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:06:08.216591  302822 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1216 05:06:03.824400  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:06:06.321413  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:06:08.323256  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	I1216 05:06:08.221856  302822 out.go:252]   - Generating certificates and keys ...
	I1216 05:06:08.221958  302822 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:06:08.222083  302822 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:06:08.413682  302822 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:06:05.739769  306433 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:06:05.740059  306433 start.go:159] libmachine.API.Create for "kindnet-548714" (driver="docker")
	I1216 05:06:05.740095  306433 client.go:173] LocalClient.Create starting
	I1216 05:06:05.740196  306433 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:06:05.740236  306433 main.go:143] libmachine: Decoding PEM data...
	I1216 05:06:05.740265  306433 main.go:143] libmachine: Parsing certificate...
	I1216 05:06:05.740351  306433 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:06:05.740381  306433 main.go:143] libmachine: Decoding PEM data...
	I1216 05:06:05.740399  306433 main.go:143] libmachine: Parsing certificate...
	I1216 05:06:05.740780  306433 cli_runner.go:164] Run: docker network inspect kindnet-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:06:05.764651  306433 cli_runner.go:211] docker network inspect kindnet-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:06:05.764732  306433 network_create.go:284] running [docker network inspect kindnet-548714] to gather additional debugging logs...
	I1216 05:06:05.764749  306433 cli_runner.go:164] Run: docker network inspect kindnet-548714
	W1216 05:06:05.784696  306433 cli_runner.go:211] docker network inspect kindnet-548714 returned with exit code 1
	I1216 05:06:05.784735  306433 network_create.go:287] error running [docker network inspect kindnet-548714]: docker network inspect kindnet-548714: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-548714 not found
	I1216 05:06:05.784750  306433 network_create.go:289] output of [docker network inspect kindnet-548714]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-548714 not found
	
	** /stderr **
	I1216 05:06:05.784866  306433 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:05.804877  306433 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:06:05.805820  306433 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:06:05.806648  306433 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:06:05.807270  306433 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b4cd001204df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:4c:f5:dc:0a:90} reservation:<nil>}
	I1216 05:06:05.808046  306433 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-401a0a230295 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:65:83:ea:58:b6} reservation:<nil>}
	I1216 05:06:05.809076  306433 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee1a80}
	I1216 05:06:05.809113  306433 network_create.go:124] attempt to create docker network kindnet-548714 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 05:06:05.809176  306433 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-548714 kindnet-548714
	I1216 05:06:05.874923  306433 network_create.go:108] docker network kindnet-548714 192.168.94.0/24 created
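	The subnet probe above starts at 192.168.49.0/24 and, judging by the log, advances the third octet in steps of 9, skipping any /24 already claimed by an existing Docker bridge until it reaches the free 192.168.94.0/24. A toy sketch of that scan; the taken set is hard-coded here, whereas minikube derives it from `docker network inspect`:

    // Sketch: find the first free candidate /24 for a new Docker network.
    package main

    import "fmt"

    func main() {
        // Third octets of subnets already in use, per the log above.
        taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
        for octet := 49; octet < 255; octet += 9 {
            if taken[octet] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
            break
        }
    }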
	I1216 05:06:05.874982  306433 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-548714" container
	I1216 05:06:05.875085  306433 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:06:05.899586  306433 cli_runner.go:164] Run: docker volume create kindnet-548714 --label name.minikube.sigs.k8s.io=kindnet-548714 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:06:05.923117  306433 oci.go:103] Successfully created a docker volume kindnet-548714
	I1216 05:06:05.923206  306433 cli_runner.go:164] Run: docker run --rm --name kindnet-548714-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-548714 --entrypoint /usr/bin/test -v kindnet-548714:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:06:06.385785  306433 oci.go:107] Successfully prepared a docker volume kindnet-548714
	I1216 05:06:06.385862  306433 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:06.385876  306433 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:06:06.385968  306433 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-548714:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.778794363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.781576816Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=192c8eb6-20bb-4448-87db-2a19f5cc99ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.782334683Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7696cbeb-0279-4b96-8975-787be1c651d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.783538697Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.784133199Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.784316539Z" level=info msg="Ran pod sandbox 3ea7e3bb303afa845350ac95e858300077f91a711fea31ec98366930ac4d2c76 with infra container: kube-system/kube-proxy-ls9s9/POD" id=192c8eb6-20bb-4448-87db-2a19f5cc99ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.785088067Z" level=info msg="Ran pod sandbox 4bb728ede5a1d91437f474aec5e35a9285ee5557c20f807439ffe60762f00cae with infra container: kube-system/kindnet-btffs/POD" id=7696cbeb-0279-4b96-8975-787be1c651d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.785679557Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b9ed9235-e897-4dcb-b02b-87d7e94ffb12 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.786804692Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3aa22e6f-dc18-4765-b736-27a7ea63721e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.787554943Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=649fa012-7d46-4eb1-b421-74e95c41cac3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.788238653Z" level=info msg="Creating container: kube-system/kube-proxy-ls9s9/kube-proxy" id=f3045519-073c-4786-9ac7-5eee57864761 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.788359679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.788508361Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=27e4cb45-3c96-4586-8b60-49d562bcad7e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.789850971Z" level=info msg="Creating container: kube-system/kindnet-btffs/kindnet-cni" id=09dcbefc-09d9-42b3-a327-20678e41b802 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.789991076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.794177368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.796418513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.796512881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.796944532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.831366126Z" level=info msg="Created container c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696: kube-system/kindnet-btffs/kindnet-cni" id=09dcbefc-09d9-42b3-a327-20678e41b802 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.832164273Z" level=info msg="Starting container: c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696" id=6117e89c-95b6-4044-b995-b25da9180aa5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.833681852Z" level=info msg="Created container 861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c: kube-system/kube-proxy-ls9s9/kube-proxy" id=f3045519-073c-4786-9ac7-5eee57864761 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.834817602Z" level=info msg="Starting container: 861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c" id=06bde844-e929-4ec3-9d08-6f22d20c1158 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.834861564Z" level=info msg="Started container" PID=1042 containerID=c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696 description=kube-system/kindnet-btffs/kindnet-cni id=6117e89c-95b6-4044-b995-b25da9180aa5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bb728ede5a1d91437f474aec5e35a9285ee5557c20f807439ffe60762f00cae
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.838512029Z" level=info msg="Started container" PID=1043 containerID=861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c description=kube-system/kube-proxy-ls9s9/kube-proxy id=06bde844-e929-4ec3-9d08-6f22d20c1158 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ea7e3bb303afa845350ac95e858300077f91a711fea31ec98366930ac4d2c76
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c833633bfd1f8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   4bb728ede5a1d       kindnet-btffs                               kube-system
	861164120a425       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   3ea7e3bb303af       kube-proxy-ls9s9                            kube-system
	792381e459509       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   3d7172889c974       kube-apiserver-newest-cni-418580            kube-system
	5aa99fc300b28       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   50fd46bf653cf       kube-controller-manager-newest-cni-418580   kube-system
	6c35f39d09b5d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   46ebafdc58152       kube-scheduler-newest-cni-418580            kube-system
	31d4485dc7982       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   9838b4a4f1a22       etcd-newest-cni-418580                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-418580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-418580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=newest-cni-418580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_05_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:05:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-418580
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:06:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:06:05 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:06:05 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:06:05 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 16 Dec 2025 05:06:05 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-418580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                964ef0bc-6b69-490b-a586-8c76d18cc2c2
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-418580                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-btffs                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-418580             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-418580    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-ls9s9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-418580             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  24s   node-controller  Node newest-cni-418580 event: Registered Node newest-cni-418580 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-418580 event: Registered Node newest-cni-418580 in Controller
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [31d4485dc7982aced65031b666c86beb9beea7f0add39a846111c271e1b90ccf] <==
	{"level":"warn","ts":"2025-12-16T05:06:09.616303Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.154962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-16T05:06:09.616460Z","caller":"traceutil/trace.go:172","msg":"trace[612266778] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:429; }","duration":"131.316604ms","start":"2025-12-16T05:06:09.485128Z","end":"2025-12-16T05:06:09.616445Z","steps":["trace[612266778] 'agreement among raft nodes before linearized reading'  (duration: 97.551586ms)","trace[612266778] 'range keys from in-memory index tree'  (duration: 33.495892ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:09.616501Z","caller":"traceutil/trace.go:172","msg":"trace[900879562] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"140.262437ms","start":"2025-12-16T05:06:09.476213Z","end":"2025-12-16T05:06:09.616476Z","steps":["trace[900879562] 'process raft request'  (duration: 106.489914ms)","trace[900879562] 'compare'  (duration: 33.529302ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:09.616631Z","caller":"traceutil/trace.go:172","msg":"trace[218378059] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"140.078753ms","start":"2025-12-16T05:06:09.476540Z","end":"2025-12-16T05:06:09.616619Z","steps":["trace[218378059] 'process raft request'  (duration: 139.843943ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.616659Z","caller":"traceutil/trace.go:172","msg":"trace[1801022997] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"139.210678ms","start":"2025-12-16T05:06:09.477439Z","end":"2025-12-16T05:06:09.616650Z","steps":["trace[1801022997] 'process raft request'  (duration: 139.035017ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:06:09.616685Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.604516ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T05:06:09.616720Z","caller":"traceutil/trace.go:172","msg":"trace[2006365046] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:0; response_revision:435; }","duration":"112.647075ms","start":"2025-12-16T05:06:09.504064Z","end":"2025-12-16T05:06:09.616711Z","steps":["trace[2006365046] 'agreement among raft nodes before linearized reading'  (duration: 112.574977ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.616810Z","caller":"traceutil/trace.go:172","msg":"trace[1090414356] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"139.102956ms","start":"2025-12-16T05:06:09.477700Z","end":"2025-12-16T05:06:09.616803Z","steps":["trace[1090414356] 'process raft request'  (duration: 138.843105ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.616870Z","caller":"traceutil/trace.go:172","msg":"trace[2075775532] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"139.140012ms","start":"2025-12-16T05:06:09.477703Z","end":"2025-12-16T05:06:09.616843Z","steps":["trace[2075775532] 'process raft request'  (duration: 138.89728ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.616867Z","caller":"traceutil/trace.go:172","msg":"trace[944890944] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"139.918214ms","start":"2025-12-16T05:06:09.476937Z","end":"2025-12-16T05:06:09.616856Z","steps":["trace[944890944] 'process raft request'  (duration: 139.493103ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.844890Z","caller":"traceutil/trace.go:172","msg":"trace[1949662580] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"220.448708ms","start":"2025-12-16T05:06:09.624417Z","end":"2025-12-16T05:06:09.844866Z","steps":["trace[1949662580] 'process raft request'  (duration: 125.284953ms)","trace[1949662580] 'compare'  (duration: 94.925745ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:09.845823Z","caller":"traceutil/trace.go:172","msg":"trace[1415415350] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"217.884008ms","start":"2025-12-16T05:06:09.627926Z","end":"2025-12-16T05:06:09.845810Z","steps":["trace[1415415350] 'process raft request'  (duration: 217.80308ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.845850Z","caller":"traceutil/trace.go:172","msg":"trace[1869093682] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"218.768234ms","start":"2025-12-16T05:06:09.627067Z","end":"2025-12-16T05:06:09.845835Z","steps":["trace[1869093682] 'process raft request'  (duration: 218.632082ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.845978Z","caller":"traceutil/trace.go:172","msg":"trace[1585753833] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"218.987993ms","start":"2025-12-16T05:06:09.626982Z","end":"2025-12-16T05:06:09.845970Z","steps":["trace[1585753833] 'process raft request'  (duration: 218.65487ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.027906Z","caller":"traceutil/trace.go:172","msg":"trace[185913012] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"173.567334ms","start":"2025-12-16T05:06:09.854314Z","end":"2025-12-16T05:06:10.027881Z","steps":["trace[185913012] 'process raft request'  (duration: 173.47287ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.027970Z","caller":"traceutil/trace.go:172","msg":"trace[1896521222] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"157.770996ms","start":"2025-12-16T05:06:09.870184Z","end":"2025-12-16T05:06:10.027955Z","steps":["trace[1896521222] 'process raft request'  (duration: 157.741922ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.028143Z","caller":"traceutil/trace.go:172","msg":"trace[818602939] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"174.335241ms","start":"2025-12-16T05:06:09.853793Z","end":"2025-12-16T05:06:10.028128Z","steps":["trace[818602939] 'process raft request'  (duration: 152.003962ms)","trace[818602939] 'compare'  (duration: 21.878351ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:10.028218Z","caller":"traceutil/trace.go:172","msg":"trace[875561013] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"170.895044ms","start":"2025-12-16T05:06:09.857309Z","end":"2025-12-16T05:06:10.028204Z","steps":["trace[875561013] 'process raft request'  (duration: 170.552595ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.028269Z","caller":"traceutil/trace.go:172","msg":"trace[1541164148] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"170.89657ms","start":"2025-12-16T05:06:09.857361Z","end":"2025-12-16T05:06:10.028257Z","steps":["trace[1541164148] 'process raft request'  (duration: 170.530852ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.028236Z","caller":"traceutil/trace.go:172","msg":"trace[179833025] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"172.444289ms","start":"2025-12-16T05:06:09.855771Z","end":"2025-12-16T05:06:10.028215Z","steps":["trace[179833025] 'process raft request'  (duration: 172.048027ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:06:10.274808Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.255836ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597756146472515 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-b84665fb8.188199bde95edade\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-b84665fb8.188199bde95edade\" value_size:676 lease:499225719291696326 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-16T05:06:10.275131Z","caller":"traceutil/trace.go:172","msg":"trace[1828505244] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"243.622892ms","start":"2025-12-16T05:06:10.031486Z","end":"2025-12-16T05:06:10.275109Z","steps":["trace[1828505244] 'process raft request'  (duration: 121.963521ms)","trace[1828505244] 'compare'  (duration: 120.959649ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:10.275203Z","caller":"traceutil/trace.go:172","msg":"trace[1476405070] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"241.134484ms","start":"2025-12-16T05:06:10.034055Z","end":"2025-12-16T05:06:10.275189Z","steps":["trace[1476405070] 'process raft request'  (duration: 241.066903ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.275237Z","caller":"traceutil/trace.go:172","msg":"trace[1608356931] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"243.072448ms","start":"2025-12-16T05:06:10.032159Z","end":"2025-12-16T05:06:10.275231Z","steps":["trace[1608356931] 'process raft request'  (duration: 242.780955ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.275315Z","caller":"traceutil/trace.go:172","msg":"trace[1014788976] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"241.320568ms","start":"2025-12-16T05:06:10.033980Z","end":"2025-12-16T05:06:10.275301Z","steps":["trace[1014788976] 'process raft request'  (duration: 241.071385ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:06:11 up 48 min,  0 user,  load average: 4.79, 3.21, 2.13
	Linux newest-cni-418580 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696] <==
	I1216 05:06:07.133488       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:06:07.133781       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 05:06:07.133922       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:06:07.133943       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:06:07.133974       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:06:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:06:07.336234       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:06:07.336292       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:06:07.336329       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:06:07.337004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:06:07.733427       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:06:07.733531       1 metrics.go:72] Registering metrics
	I1216 05:06:07.735228       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [792381e45950963b0ae8db2f5ad977f211daaf0810b7a93dd2d909444e1821c4] <==
	I1216 05:06:05.645773       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 05:06:05.645848       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 05:06:05.660201       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1216 05:06:05.661336       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:06:05.663481       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:06:05.664215       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:05.664240       1 policy_source.go:248] refreshing policies
	I1216 05:06:05.681511       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 05:06:05.681632       1 aggregator.go:187] initial CRD sync complete...
	I1216 05:06:05.681675       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 05:06:05.681704       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:06:05.681735       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:06:05.683149       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:06:05.684554       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:06:06.003040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:06:06.037462       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:06:06.058381       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:06:06.065769       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:06:06.071731       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:06:06.124365       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.65.71"}
	I1216 05:06:06.140507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.148.69"}
	I1216 05:06:06.538844       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 05:06:09.142141       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:06:09.269435       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:06:09.477069       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5aa99fc300b28206b76480bb951b5c8dfa804ac51633f6fe3b8e67aa1be01b31] <==
	I1216 05:06:08.804772       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1216 05:06:08.797834       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797823       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797843       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797849       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797861       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797873       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797882       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797912       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797916       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797923       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797928       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797957       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797802       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.806473       1 range_allocator.go:177] "Sending events to api server"
	I1216 05:06:08.806498       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1216 05:06:08.806504       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:06:08.806511       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797899       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.809610       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.814794       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:06:08.897545       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.897590       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 05:06:08.897598       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 05:06:08.915232       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c] <==
	I1216 05:06:06.881650       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:06:06.949513       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:06:07.050082       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:07.050117       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 05:06:07.050258       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:06:07.070886       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:06:07.070940       1 server_linux.go:136] "Using iptables Proxier"
	I1216 05:06:07.076390       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:06:07.076833       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 05:06:07.076861       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:06:07.081600       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:06:07.081713       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:06:07.081802       1 config.go:200] "Starting service config controller"
	I1216 05:06:07.081872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:06:07.078676       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:06:07.081953       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:06:07.081924       1 config.go:309] "Starting node config controller"
	I1216 05:06:07.082069       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:06:07.082096       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:06:07.181972       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:06:07.181985       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:06:07.182079       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6c35f39d09b5d69eece5daa0f5057bb2198649a6a6ddb2a863f25c31e0a13ce2] <==
	I1216 05:06:04.395346       1 serving.go:386] Generated self-signed cert in-memory
	W1216 05:06:05.587855       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:06:05.587890       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:06:05.587903       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:06:05.587924       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:06:05.628296       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1216 05:06:05.628414       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:06:05.631532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:06:05.631620       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:06:05.634346       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:06:05.634432       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:06:05.732637       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: I1216 05:06:05.686843     663 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: E1216 05:06:05.693916     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-418580\" already exists" pod="kube-system/kube-apiserver-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: I1216 05:06:05.693958     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: E1216 05:06:05.704625     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-418580\" already exists" pod="kube-system/kube-controller-manager-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: I1216 05:06:05.706675     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: E1216 05:06:05.723455     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-418580\" already exists" pod="kube-system/kube-scheduler-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: I1216 05:06:05.723723     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: E1216 05:06:05.734039     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-418580\" already exists" pod="kube-system/etcd-newest-cni-418580"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.375722     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-418580"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.382487     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-418580\" already exists" pod="kube-system/kube-controller-manager-newest-cni-418580"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.382589     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-418580" containerName="kube-controller-manager"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.467188     663 apiserver.go:52] "Watching apiserver"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.477057     663 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.480998     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbf8d374-0c38-4c64-ac2c-9a71db8b4223-xtables-lock\") pod \"kube-proxy-ls9s9\" (UID: \"dbf8d374-0c38-4c64-ac2c-9a71db8b4223\") " pod="kube-system/kube-proxy-ls9s9"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.481460     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-cni-cfg\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.481498     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-xtables-lock\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.481542     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbf8d374-0c38-4c64-ac2c-9a71db8b4223-lib-modules\") pod \"kube-proxy-ls9s9\" (UID: \"dbf8d374-0c38-4c64-ac2c-9a71db8b4223\") " pod="kube-system/kube-proxy-ls9s9"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.481589     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-lib-modules\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.540463     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-418580" containerName="etcd"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.540770     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-418580" containerName="kube-controller-manager"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.540840     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-418580" containerName="kube-scheduler"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.541077     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-418580" containerName="kube-apiserver"
	Dec 16 05:06:07 newest-cni-418580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:06:08 newest-cni-418580 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:06:08 newest-cni-418580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
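Editor's note: the kubelet section above ends with systemd deactivating kubelet.service, consistent with the pause operation being partially applied when the harness probed the node. A quick manual check of the node agent's state (a suggested follow-up, assuming the newest-cni-418580 profile still exists; not part of the harness) is:

	minikube -p newest-cni-418580 ssh -- sudo systemctl status kubelet --no-pager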
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-418580 -n newest-cni-418580
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-418580 -n newest-cni-418580: exit status 2 (329.822305ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
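Editor's note: the template selects a single field, so the output can read Running while the non-zero exit code still encodes a failed component check. To view every component state at once (a generic minikube invocation, not part of the harness), JSON output can be used:

	out/minikube-linux-amd64 status -p newest-cni-418580 --output json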
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-418580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-67tdz storage-provisioner dashboard-metrics-scraper-867fb5f87b-zvx2d kubernetes-dashboard-b84665fb8-fhc6m
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner dashboard-metrics-scraper-867fb5f87b-zvx2d kubernetes-dashboard-b84665fb8-fhc6m
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner dashboard-metrics-scraper-867fb5f87b-zvx2d kubernetes-dashboard-b84665fb8-fhc6m: exit status 1 (61.765055ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-67tdz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-zvx2d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fhc6m" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner dashboard-metrics-scraper-867fb5f87b-zvx2d kubernetes-dashboard-b84665fb8-fhc6m: exit status 1
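Editor's note: the NotFound responses indicate the non-running pods listed above were deleted or replaced under new pod-template hashes between the field-selector query and the describe call. Re-running the same selector (the command the harness itself uses, offered here as a manual follow-up) would surface the current offenders:

	kubectl --context newest-cni-418580 get po -A --field-selector=status.phase!=Running -o wide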
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-418580
helpers_test.go:244: (dbg) docker inspect newest-cni-418580:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1",
	        "Created": "2025-12-16T05:05:28.422423027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:05:55.147534054Z",
	            "FinishedAt": "2025-12-16T05:05:54.231520148Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/hosts",
	        "LogPath": "/var/lib/docker/containers/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1/7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1-json.log",
	        "Name": "/newest-cni-418580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-418580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-418580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ec1bd166949555d062d97b07abd21e6a6fb11e82c9d34a9a811883d328621e1",
	                "LowerDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7056c710110fec95b10c612bd1973d19224162761edd47e7b68bf4e92e0973d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-418580",
	                "Source": "/var/lib/docker/volumes/newest-cni-418580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-418580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-418580",
	                "name.minikube.sigs.k8s.io": "newest-cni-418580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "721b3ecee6376869daa6214cdeb5eb180664e052bba1252dee9831f03f0f8d1a",
	            "SandboxKey": "/var/run/docker/netns/721b3ecee637",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-418580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "401a0a2302958aa43185772bb4bb58f2878244e830bb4ed218c958e2da9161d5",
	                    "EndpointID": "330f1cdf085b1d66245d40aeaf59d56e66f1c595ef2b7727f56d55e4e92dbf92",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "3a:9c:46:e2:07:44",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-418580",
	                        "7ec1bd166949"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
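Editor's note: the NetworkSettings.Ports map above is how the host reaches the node; 8443/tcp, the API server port, is published on 127.0.0.1:33112. A Go-template query against the same inspect data extracts a single mapping without scanning the full JSON (a generic docker CLI pattern, not part of the harness):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-418580

For the container above this should print 33112.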
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418580 -n newest-cni-418580
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418580 -n newest-cni-418580: exit status 2 (321.180183ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-418580 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-375252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-375252 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ image   │ old-k8s-version-545075 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p old-k8s-version-545075 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p newest-cni-418580 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-418580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:06 UTC │
	│ image   │ embed-certs-127599 image list --format=json                                                                                                                                                                                                          │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p embed-certs-127599 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p auto-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-548714                  │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p embed-certs-127599                                                                                                                                                                                                                                │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ delete  │ -p embed-certs-127599                                                                                                                                                                                                                                │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ start   │ -p kindnet-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-548714               │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	│ image   │ newest-cni-418580 image list --format=json                                                                                                                                                                                                           │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ pause   │ -p newest-cni-418580 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:06:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:06:05.454374  306433 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:06:05.454495  306433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:05.454508  306433 out.go:374] Setting ErrFile to fd 2...
	I1216 05:06:05.454517  306433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:05.454795  306433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:06:05.455557  306433 out.go:368] Setting JSON to false
	I1216 05:06:05.460960  306433 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2916,"bootTime":1765858649,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:06:05.461086  306433 start.go:143] virtualization: kvm guest
	I1216 05:06:05.464428  306433 out.go:179] * [kindnet-548714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:06:05.465939  306433 notify.go:221] Checking for updates...
	I1216 05:06:05.466006  306433 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:06:05.469753  306433 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:06:05.471043  306433 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:06:05.472488  306433 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:06:05.473679  306433 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:06:05.475041  306433 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:06:05.476715  306433 config.go:182] Loaded profile config "auto-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:05.476862  306433 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:05.477010  306433 config.go:182] Loaded profile config "newest-cni-418580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:06:05.477132  306433 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:06:05.508594  306433 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:06:05.508781  306433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:05.597858  306433 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:06:05.577806462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:05.599131  306433 docker.go:319] overlay module found
	I1216 05:06:05.600990  306433 out.go:179] * Using the docker driver based on user configuration
	I1216 05:06:05.602241  306433 start.go:309] selected driver: docker
	I1216 05:06:05.602257  306433 start.go:927] validating driver "docker" against <nil>
	I1216 05:06:05.602272  306433 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:06:05.603084  306433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:05.693222  306433 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:06:05.679115537 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:05.693447  306433 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:06:05.693746  306433 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:06:05.694993  306433 out.go:179] * Using Docker driver with root privileges
	I1216 05:06:05.696098  306433 cni.go:84] Creating CNI manager for "kindnet"
	I1216 05:06:05.696122  306433 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:06:05.696215  306433 start.go:353] cluster config:
	{Name:kindnet-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:05.697483  306433 out.go:179] * Starting "kindnet-548714" primary control-plane node in "kindnet-548714" cluster
	I1216 05:06:05.698792  306433 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:06:05.700535  306433 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:06:05.702191  306433 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:05.702392  306433 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:06:05.702405  306433 cache.go:65] Caching tarball of preloaded images
	I1216 05:06:05.702503  306433 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:06:05.702513  306433 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:06:05.702628  306433 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/config.json ...
	I1216 05:06:05.702648  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/config.json: {Name:mk4bfd6285be932047085ce22fb0ce76bf268c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:05.702275  306433 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:06:05.737667  306433 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:06:05.737691  306433 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:06:05.737711  306433 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:06:05.737750  306433 start.go:360] acquireMachinesLock for kindnet-548714: {Name:mke5003da2d1236e174e843cc78b8a9282861f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:06:05.737861  306433 start.go:364] duration metric: took 90.645µs to acquireMachinesLock for "kindnet-548714"
	I1216 05:06:05.737890  306433 start.go:93] Provisioning new machine with config: &{Name:kindnet-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:06:05.737972  306433 start.go:125] createHost starting for "" (driver="docker")
	I1216 05:06:06.273753  300655 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.786516131s)
	I1216 05:06:06.273700  300655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.789251044s)
	I1216 05:06:06.273790  300655 api_server.go:72] duration metric: took 1.973437927s to wait for apiserver process to appear ...
	I1216 05:06:06.273799  300655 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:06:06.273823  300655 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:06:06.273858  300655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.761721958s)
	I1216 05:06:06.274053  300655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.616316488s)
	I1216 05:06:06.275959  300655 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-418580 addons enable metrics-server
	
	I1216 05:06:06.284934  300655 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:06:06.284961  300655 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
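
The two healthz dumps above show the apiserver returning 500 while the rbac and scheduling post-start hooks finish, then (further below) 200 once bootstrap completes. A minimal Go sketch of this poll-until-healthy loop, assuming the endpoint from the log; the interval and overall deadline are made-up values, and certificate verification is skipped only because the bootstrap apiserver presents a self-signed cert:

	// healthzpoll.go: a sketch only, not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skip TLS verification: the bootstrap apiserver is self-signed.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthy:", string(body)) // body is "ok"
					return
				}
				// On 500 the body lists each check as [+] ok / [-] failed,
				// exactly as in the dump above.
				fmt.Printf("status %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}
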
	I1216 05:06:06.292907  300655 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1216 05:06:04.315664  302822 provision.go:177] copyRemoteCerts
	I1216 05:06:04.315804  302822 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:06:04.315880  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:04.346498  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:04.460495  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:06:04.484377  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 05:06:04.508621  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:06:04.539581  302822 provision.go:87] duration metric: took 570.151931ms to configureAuth
	I1216 05:06:04.539615  302822 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:06:04.539817  302822 config.go:182] Loaded profile config "auto-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:04.539961  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:04.571893  302822 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:04.572228  302822 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1216 05:06:04.572259  302822 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:06:04.909510  302822 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:06:04.909538  302822 machine.go:97] duration metric: took 1.436593047s to provisionDockerMachine
	I1216 05:06:04.909549  302822 client.go:176] duration metric: took 5.680791973s to LocalClient.Create
	I1216 05:06:04.909570  302822 start.go:167] duration metric: took 5.680856057s to libmachine.API.Create "auto-548714"
	I1216 05:06:04.909579  302822 start.go:293] postStartSetup for "auto-548714" (driver="docker")
	I1216 05:06:04.909591  302822 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:06:04.909658  302822 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:06:04.909710  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:04.932393  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:05.042745  302822 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:06:05.047921  302822 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:06:05.047957  302822 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:06:05.047970  302822 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:06:05.048043  302822 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:06:05.048354  302822 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:06:05.048506  302822 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:06:05.061351  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:05.086237  302822 start.go:296] duration metric: took 176.644074ms for postStartSetup
	I1216 05:06:05.086654  302822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-548714
	I1216 05:06:05.106602  302822 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/config.json ...
	I1216 05:06:05.106810  302822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:06:05.106848  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:05.126683  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:05.225429  302822 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:06:05.236480  302822 start.go:128] duration metric: took 6.014018982s to createHost
	I1216 05:06:05.236511  302822 start.go:83] releasing machines lock for "auto-548714", held for 6.014150762s
	I1216 05:06:05.236591  302822 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-548714
	I1216 05:06:05.262791  302822 ssh_runner.go:195] Run: cat /version.json
	I1216 05:06:05.262849  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:05.262875  302822 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:06:05.262963  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:05.284120  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:05.284736  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:05.378270  302822 ssh_runner.go:195] Run: systemctl --version
	I1216 05:06:05.449839  302822 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:06:05.506686  302822 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:06:05.512233  302822 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:06:05.512307  302822 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:06:05.552181  302822 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:06:05.552206  302822 start.go:496] detecting cgroup driver to use...
	I1216 05:06:05.552247  302822 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:06:05.552296  302822 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:06:05.588272  302822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:06:05.611172  302822 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:06:05.611245  302822 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:06:05.638614  302822 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:06:05.671577  302822 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:06:05.795360  302822 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:06:05.912744  302822 docker.go:234] disabling docker service ...
	I1216 05:06:05.912812  302822 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:06:05.938530  302822 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:06:05.953224  302822 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:06:06.074679  302822 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:06:06.197549  302822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:06:06.211819  302822 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:06:06.227706  302822 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:06:06.227778  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.250010  302822 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:06:06.250104  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.260247  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.270795  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.282829  302822 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:06:06.293372  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.303138  302822 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.318658  302822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:06.328347  302822 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:06:06.344493  302822 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:06:06.353030  302822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:06.446142  302822 ssh_runner.go:195] Run: sudo systemctl restart crio
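
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed before reloading systemd and restarting CRI-O. A minimal Go sketch of the same line-rewriting approach for the first two edits (pause_image and cgroup_manager); it assumes direct local file access rather than the ssh_runner used in the log, and keeps error handling deliberately thin:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteLine replaces every line matching pattern with repl, mirroring
	// the `sudo sed -i 's|^.*... = .*$|...|'` calls above. The (?m) flag
	// makes ^ and $ match at line boundaries.
	func rewriteLine(path, pattern, repl string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile("(?m)" + pattern)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		if err := rewriteLine(conf, `^.*pause_image = .*$`,
			`pause_image = "registry.k8s.io/pause:3.10.1"`); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		if err := rewriteLine(conf, `^.*cgroup_manager = .*$`,
			`cgroup_manager = "systemd"`); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
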
	I1216 05:06:06.605145  302822 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:06:06.605222  302822 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:06:06.609928  302822 start.go:564] Will wait 60s for crictl version
	I1216 05:06:06.609988  302822 ssh_runner.go:195] Run: which crictl
	I1216 05:06:06.614476  302822 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:06:06.643951  302822 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:06:06.644059  302822 ssh_runner.go:195] Run: crio --version
	I1216 05:06:06.682104  302822 ssh_runner.go:195] Run: crio --version
	I1216 05:06:06.721595  302822 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:06:06.294044  300655 addons.go:530] duration metric: took 1.993462564s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1216 05:06:06.774754  300655 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 05:06:06.780148  300655 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 05:06:06.781291  300655 api_server.go:141] control plane version: v1.35.0-beta.0
	I1216 05:06:06.781325  300655 api_server.go:131] duration metric: took 507.51835ms to wait for apiserver health ...
	I1216 05:06:06.781338  300655 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:06:06.786681  300655 system_pods.go:59] 8 kube-system pods found
	I1216 05:06:06.786717  300655 system_pods.go:61] "coredns-7d764666f9-67tdz" [2e7ab1b2-db44-41e7-9601-fe1fb0f149b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:06:06.786740  300655 system_pods.go:61] "etcd-newest-cni-418580" [5345f203-8998-46d2-af9b-9f60ee31c9b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:06:06.786752  300655 system_pods.go:61] "kindnet-btffs" [54b00649-4160-4dab-bce1-83080b927bde] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1216 05:06:06.786762  300655 system_pods.go:61] "kube-apiserver-newest-cni-418580" [922b709d-f979-411d-87d3-1d6e2085c405] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:06:06.786770  300655 system_pods.go:61] "kube-controller-manager-newest-cni-418580" [41246201-9b24-43b9-a291-b5d087085e7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:06:06.786783  300655 system_pods.go:61] "kube-proxy-ls9s9" [dbf8d374-0c38-4c64-ac2c-9a71db8b4223] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 05:06:06.786792  300655 system_pods.go:61] "kube-scheduler-newest-cni-418580" [7a507411-b5e7-4bc1-8740-e07e3b476f8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:06:06.786809  300655 system_pods.go:61] "storage-provisioner" [fa6b12e1-67b9-40bb-93c3-42a32f0d9ed4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1216 05:06:06.786822  300655 system_pods.go:74] duration metric: took 5.475398ms to wait for pod list to return data ...
	I1216 05:06:06.786833  300655 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:06:06.789795  300655 default_sa.go:45] found service account: "default"
	I1216 05:06:06.789823  300655 default_sa.go:55] duration metric: took 2.983136ms for default service account to be created ...
	I1216 05:06:06.789838  300655 kubeadm.go:587] duration metric: took 2.489483599s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 05:06:06.789857  300655 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:06:06.792779  300655 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 05:06:06.792805  300655 node_conditions.go:123] node cpu capacity is 8
	I1216 05:06:06.792826  300655 node_conditions.go:105] duration metric: took 2.954454ms to run NodePressure ...
	I1216 05:06:06.792838  300655 start.go:242] waiting for startup goroutines ...
	I1216 05:06:06.792855  300655 start.go:247] waiting for cluster config update ...
	I1216 05:06:06.792869  300655 start.go:256] writing updated cluster config ...
	I1216 05:06:06.793154  300655 ssh_runner.go:195] Run: rm -f paused
	I1216 05:06:06.863124  300655 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1216 05:06:06.868474  300655 out.go:179] * Done! kubectl is now configured to use "newest-cni-418580" cluster and "default" namespace by default
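
The closing start.go:625 line reports the client/cluster minor version skew (kubectl 1.34.3 against cluster 1.35.0-beta.0). A hand-rolled sketch of that skew computation; real code would use a proper semver library, and the hard-coded versions below are just the ones from the log:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor version from strings like "1.34.3" or
	// "1.35.0-beta.0"; pre-release suffixes land in later fields, so
	// splitting on "." and taking the second field is enough here.
	func minor(v string) int {
		parts := strings.Split(v, ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}

	func main() {
		kubectl, cluster := "1.34.3", "1.35.0-beta.0"
		skew := minor(cluster) - minor(kubectl)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	}
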
	I1216 05:06:06.722670  302822 cli_runner.go:164] Run: docker network inspect auto-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:06.744979  302822 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1216 05:06:06.749933  302822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
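
The /etc/hosts update above is a filter-then-append pipeline: grep -v strips any stale line for the name, the fresh entry is echoed, and the result is copied back over /etc/hosts. A minimal Go sketch of the same idempotent update; the temp-file-plus-rename step is an assumption (the log copies through /tmp/h.$$ with sudo cp instead):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line ending in "\t<name>" and
	// appends a fresh "<ip>\t<name>" line.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		out := strings.Join(kept, "\n")
		if out != "" && !strings.HasSuffix(out, "\n") {
			out += "\n"
		}
		out += ip + "\t" + name + "\n"
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(out), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
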
	I1216 05:06:06.761754  302822 kubeadm.go:884] updating cluster {Name:auto-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:06:06.761861  302822 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:06.761911  302822 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:06.806300  302822 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:06.806324  302822 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:06:06.806375  302822 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:06.843009  302822 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:06.843071  302822 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:06:06.843083  302822 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1216 05:06:06.843204  302822 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-548714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:06:06.843290  302822 ssh_runner.go:195] Run: crio config
	I1216 05:06:06.904402  302822 cni.go:84] Creating CNI manager for ""
	I1216 05:06:06.904434  302822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:06:06.904454  302822 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:06:06.904490  302822 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-548714 NodeName:auto-548714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:06:06.904666  302822 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-548714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
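
The generated kubeadm config above pins podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12, which must not overlap (10.96.0.0/12 spans 10.96.0.0 through 10.111.255.255). A small Go sketch that checks this invariant with net/netip; it exploits the fact that two aligned CIDR prefixes overlap exactly when one contains the other's base address:

	package main

	import (
		"fmt"
		"net/netip"
	)

	// overlaps reports whether two CIDR prefixes share any addresses.
	func overlaps(a, b netip.Prefix) bool {
		return a.Contains(b.Addr()) || b.Contains(a.Addr())
	}

	func main() {
		pod := netip.MustParsePrefix("10.244.0.0/16") // podSubnet above
		svc := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet above
		fmt.Println("pod/service overlap:", overlaps(pod, svc)) // false
	}
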
	
	I1216 05:06:06.904744  302822 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:06:06.915026  302822 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:06:06.915100  302822 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:06:06.923718  302822 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 05:06:06.939928  302822 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:06:06.958767  302822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1216 05:06:06.973822  302822 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:06:06.978790  302822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:06:06.992267  302822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:07.091720  302822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:07.115163  302822 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714 for IP: 192.168.103.2
	I1216 05:06:07.115191  302822 certs.go:195] generating shared ca certs ...
	I1216 05:06:07.115211  302822 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.115388  302822 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:06:07.115458  302822 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:06:07.115473  302822 certs.go:257] generating profile certs ...
	I1216 05:06:07.115550  302822 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.key
	I1216 05:06:07.115566  302822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.crt with IP's: []
	I1216 05:06:07.211064  302822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.crt ...
	I1216 05:06:07.211098  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.crt: {Name:mkdf5bb7daa6198e1f584edc92121727b7fa93d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.211311  302822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.key ...
	I1216 05:06:07.211327  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/client.key: {Name:mkf9146d797d6e8eb77041efe4608512359f6a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.211457  302822 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key.60184c8a
	I1216 05:06:07.211474  302822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt.60184c8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 05:06:07.278754  302822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt.60184c8a ...
	I1216 05:06:07.278780  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt.60184c8a: {Name:mk172aefc28213870531919a7e7db7c41a470622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.278926  302822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key.60184c8a ...
	I1216 05:06:07.278935  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key.60184c8a: {Name:mka88400e8e81aba77dc2366d8c8a72c30f0f12a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.279031  302822 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt.60184c8a -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt
	I1216 05:06:07.279116  302822 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key.60184c8a -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key
	I1216 05:06:07.279169  302822 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.key
	I1216 05:06:07.279181  302822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.crt with IP's: []
	I1216 05:06:07.408248  302822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.crt ...
	I1216 05:06:07.408271  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.crt: {Name:mke46e8b927f72f1edcff928726bfd8d96db8bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:07.408430  302822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.key ...
	I1216 05:06:07.408446  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.key: {Name:mk46da144b13b1cc1808c00e2cc5c86a6f277b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
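
The certs.go steps above generate the profile certificates, including an apiserver cert whose IP SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]. A minimal crypto/x509 sketch producing a certificate with those SANs; as assumptions for brevity, it is self-signed and uses an ECDSA key, whereas the real cert is signed by minikubeCA, and the lifetime reuses the CertExpiration value (26280h) from the config dump:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// ECDSA is an assumption made to keep the sketch short.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs requested in the log for the apiserver cert:
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
			},
		}
		// Self-signed here (template doubles as parent) for illustration.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
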
	I1216 05:06:07.408664  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:06:07.408722  302822 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:06:07.408736  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:06:07.408769  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:06:07.408807  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:06:07.408844  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:06:07.408900  302822 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:07.409525  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:06:07.433096  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:06:07.451497  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:06:07.469124  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:06:07.487238  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1216 05:06:07.507672  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:06:07.528689  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:06:07.550152  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/auto-548714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:06:07.572679  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:06:07.598147  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:06:07.622104  302822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:06:07.647876  302822 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:06:07.665960  302822 ssh_runner.go:195] Run: openssl version
	I1216 05:06:07.674631  302822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:07.689204  302822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:06:07.699498  302822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:07.704553  302822 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:07.704617  302822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:07.752605  302822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:06:07.763352  302822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:06:07.770849  302822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:06:07.778836  302822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:06:07.787798  302822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:06:07.791890  302822 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:06:07.791950  302822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:06:07.830402  302822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:06:07.841363  302822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:06:07.850249  302822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:06:07.859225  302822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:06:07.867268  302822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:06:07.871515  302822 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:06:07.871581  302822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:06:07.913944  302822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:07.922299  302822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:07.930119  302822 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:06:07.934805  302822 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:06:07.934875  302822 kubeadm.go:401] StartCluster: {Name:auto-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:07.934966  302822 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:06:07.935037  302822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:06:07.966344  302822 cri.go:89] found id: ""
	I1216 05:06:07.966423  302822 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:06:07.975150  302822 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:07.983585  302822 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:07.983648  302822 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:07.993121  302822 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:07.993152  302822 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:07.993206  302822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:06:08.002508  302822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:08.002580  302822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:08.011710  302822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:06:08.021353  302822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:08.021481  302822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:08.030473  302822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:06:08.041454  302822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:08.041520  302822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:08.052372  302822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:06:08.060865  302822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:08.060923  302822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
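
The stale-config cleanup above greps each kubeconfig for the expected control-plane endpoint and, when the grep fails (here because the files do not exist yet), removes the file so kubeadm can regenerate it. A minimal Go sketch of that keep-or-remove check; paths and endpoint are taken from the log, and it runs as an unprivileged local illustration rather than via sudo over SSH:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, path := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(path)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // config already targets the right endpoint
			}
			// Missing or stale: remove with rm -f semantics, ignoring errors.
			os.Remove(path)
			fmt.Println("removed stale config:", path)
		}
	}
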
	I1216 05:06:08.069590  302822 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:08.115674  302822 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 05:06:08.116347  302822 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:06:08.140643  302822 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:06:08.140726  302822 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:06:08.140776  302822 kubeadm.go:319] OS: Linux
	I1216 05:06:08.140823  302822 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:06:08.140893  302822 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:06:08.140956  302822 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:06:08.141087  302822 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:06:08.141178  302822 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:06:08.141244  302822 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:06:08.141306  302822 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:06:08.141372  302822 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:06:08.208631  302822 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:06:08.208782  302822 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:06:08.208914  302822 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:06:08.216591  302822 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1216 05:06:03.824400  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:06:06.321413  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	W1216 05:06:08.323256  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	I1216 05:06:08.221856  302822 out.go:252]   - Generating certificates and keys ...
	I1216 05:06:08.221958  302822 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:06:08.222083  302822 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:06:08.413682  302822 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:06:05.739769  306433 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:06:05.740059  306433 start.go:159] libmachine.API.Create for "kindnet-548714" (driver="docker")
	I1216 05:06:05.740095  306433 client.go:173] LocalClient.Create starting
	I1216 05:06:05.740196  306433 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:06:05.740236  306433 main.go:143] libmachine: Decoding PEM data...
	I1216 05:06:05.740265  306433 main.go:143] libmachine: Parsing certificate...
	I1216 05:06:05.740351  306433 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:06:05.740381  306433 main.go:143] libmachine: Decoding PEM data...
	I1216 05:06:05.740399  306433 main.go:143] libmachine: Parsing certificate...
	I1216 05:06:05.740780  306433 cli_runner.go:164] Run: docker network inspect kindnet-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:06:05.764651  306433 cli_runner.go:211] docker network inspect kindnet-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:06:05.764732  306433 network_create.go:284] running [docker network inspect kindnet-548714] to gather additional debugging logs...
	I1216 05:06:05.764749  306433 cli_runner.go:164] Run: docker network inspect kindnet-548714
	W1216 05:06:05.784696  306433 cli_runner.go:211] docker network inspect kindnet-548714 returned with exit code 1
	I1216 05:06:05.784735  306433 network_create.go:287] error running [docker network inspect kindnet-548714]: docker network inspect kindnet-548714: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-548714 not found
	I1216 05:06:05.784750  306433 network_create.go:289] output of [docker network inspect kindnet-548714]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-548714 not found
	
	** /stderr **
	I1216 05:06:05.784866  306433 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:05.804877  306433 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:06:05.805820  306433 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:06:05.806648  306433 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:06:05.807270  306433 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b4cd001204df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:4c:f5:dc:0a:90} reservation:<nil>}
	I1216 05:06:05.808046  306433 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-401a0a230295 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:62:65:83:ea:58:b6} reservation:<nil>}
	I1216 05:06:05.809076  306433 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee1a80}
	I1216 05:06:05.809113  306433 network_create.go:124] attempt to create docker network kindnet-548714 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 05:06:05.809176  306433 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-548714 kindnet-548714
	I1216 05:06:05.874923  306433 network_create.go:108] docker network kindnet-548714 192.168.94.0/24 created
	I1216 05:06:05.874982  306433 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-548714" container
	I1216 05:06:05.875085  306433 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:06:05.899586  306433 cli_runner.go:164] Run: docker volume create kindnet-548714 --label name.minikube.sigs.k8s.io=kindnet-548714 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:06:05.923117  306433 oci.go:103] Successfully created a docker volume kindnet-548714
	I1216 05:06:05.923206  306433 cli_runner.go:164] Run: docker run --rm --name kindnet-548714-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-548714 --entrypoint /usr/bin/test -v kindnet-548714:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:06:06.385785  306433 oci.go:107] Successfully prepared a docker volume kindnet-548714
	I1216 05:06:06.385862  306433 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:06.385876  306433 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:06:06.385968  306433 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-548714:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
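
The scan above is mechanical: candidate subnets advance in steps of 9 (192.168.49.0/24, .58, .67, ...) and the first /24 not already held by a Docker bridge is used, which is why kindnet-548714 lands on 192.168.94.0/24. A minimal shell sketch of the same check, reusing the inspect template from the log (assumes only the docker CLI; an illustration, not minikube's actual pkg/network code):

    # one subnet per line, for every network that has IPAM configured
    taken=$(docker network inspect $(docker network ls -q) \
              --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
    for third in $(seq 49 9 247); do
      cidr="192.168.${third}.0/24"
      # the first candidate missing from the taken list is the one picked
      grep -qx "$cidr" <<<"$taken" || { echo "free: $cidr"; break; }
    done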
	
	
	==> CRI-O <==
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.778794363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.781576816Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=192c8eb6-20bb-4448-87db-2a19f5cc99ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.782334683Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7696cbeb-0279-4b96-8975-787be1c651d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.783538697Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.784133199Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.784316539Z" level=info msg="Ran pod sandbox 3ea7e3bb303afa845350ac95e858300077f91a711fea31ec98366930ac4d2c76 with infra container: kube-system/kube-proxy-ls9s9/POD" id=192c8eb6-20bb-4448-87db-2a19f5cc99ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.785088067Z" level=info msg="Ran pod sandbox 4bb728ede5a1d91437f474aec5e35a9285ee5557c20f807439ffe60762f00cae with infra container: kube-system/kindnet-btffs/POD" id=7696cbeb-0279-4b96-8975-787be1c651d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.785679557Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b9ed9235-e897-4dcb-b02b-87d7e94ffb12 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.786804692Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3aa22e6f-dc18-4765-b736-27a7ea63721e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.787554943Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=649fa012-7d46-4eb1-b421-74e95c41cac3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.788238653Z" level=info msg="Creating container: kube-system/kube-proxy-ls9s9/kube-proxy" id=f3045519-073c-4786-9ac7-5eee57864761 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.788359679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.788508361Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=27e4cb45-3c96-4586-8b60-49d562bcad7e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.789850971Z" level=info msg="Creating container: kube-system/kindnet-btffs/kindnet-cni" id=09dcbefc-09d9-42b3-a327-20678e41b802 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.789991076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.794177368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.796418513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.796512881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.796944532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.831366126Z" level=info msg="Created container c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696: kube-system/kindnet-btffs/kindnet-cni" id=09dcbefc-09d9-42b3-a327-20678e41b802 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.832164273Z" level=info msg="Starting container: c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696" id=6117e89c-95b6-4044-b995-b25da9180aa5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.833681852Z" level=info msg="Created container 861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c: kube-system/kube-proxy-ls9s9/kube-proxy" id=f3045519-073c-4786-9ac7-5eee57864761 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.834817602Z" level=info msg="Starting container: 861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c" id=06bde844-e929-4ec3-9d08-6f22d20c1158 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.834861564Z" level=info msg="Started container" PID=1042 containerID=c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696 description=kube-system/kindnet-btffs/kindnet-cni id=6117e89c-95b6-4044-b995-b25da9180aa5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bb728ede5a1d91437f474aec5e35a9285ee5557c20f807439ffe60762f00cae
	Dec 16 05:06:06 newest-cni-418580 crio[516]: time="2025-12-16T05:06:06.838512029Z" level=info msg="Started container" PID=1043 containerID=861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c description=kube-system/kube-proxy-ls9s9/kube-proxy id=06bde844-e929-4ec3-9d08-6f22d20c1158 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ea7e3bb303afa845350ac95e858300077f91a711fea31ec98366930ac4d2c76
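
Each "Started container" record above carries the container ID, the PID, and the owning sandbox, so the IDs can be mapped back to pods directly on the node. A sketch using crictl inside the node container (node name taken from this log; crictl comes preconfigured in the kicbase image):

    docker exec newest-cni-418580 crictl ps            # running containers with pod names
    docker exec newest-cni-418580 crictl inspect \
      c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696 \
      | grep -m1 '"io.kubernetes.pod.name"'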
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c833633bfd1f8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   4bb728ede5a1d       kindnet-btffs                               kube-system
	861164120a425       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   3ea7e3bb303af       kube-proxy-ls9s9                            kube-system
	792381e459509       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   9 seconds ago       Running             kube-apiserver            1                   3d7172889c974       kube-apiserver-newest-cni-418580            kube-system
	5aa99fc300b28       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   9 seconds ago       Running             kube-controller-manager   1                   50fd46bf653cf       kube-controller-manager-newest-cni-418580   kube-system
	6c35f39d09b5d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   9 seconds ago       Running             kube-scheduler            1                   46ebafdc58152       kube-scheduler-newest-cni-418580            kube-system
	31d4485dc7982       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   9 seconds ago       Running             etcd                      1                   9838b4a4f1a22       etcd-newest-cni-418580                      kube-system
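
The IMAGE column holds bare digests rather than repo:tag names; they resolve against the runtime's image store. A sketch of printing the mapping (assumes jq is available on the host):

    docker exec newest-cni-418580 crictl images -o json \
      | jq -r '.images[] | .id[7:19] + "  " + (.repoTags[0] // "<none>")'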
	
	
	==> describe nodes <==
	Name:               newest-cni-418580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-418580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=newest-cni-418580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_05_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:05:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-418580
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:06:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:06:05 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:06:05 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:06:05 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 16 Dec 2025 05:06:05 +0000   Tue, 16 Dec 2025 05:05:39 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-418580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                964ef0bc-6b69-490b-a586-8c76d18cc2c2
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-418580                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-btffs                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-newest-cni-418580             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-418580    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-ls9s9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-newest-cni-418580             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node newest-cni-418580 event: Registered Node newest-cni-418580 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-418580 event: Registered Node newest-cni-418580 in Controller
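
The only failing condition is Ready=False with reason KubeletNotReady: kubelet is waiting for a CNI config in /etc/cni/net.d/, which kindnet only writes once its freshly restarted pod is running again. The flip can be watched with a jsonpath query (context name from this report):

    kubectl --context newest-cni-418580 get node newest-cni-418580 -o \
      jsonpath='{range .status.conditions[?(@.type=="Ready")]}{.status}{" "}{.reason}{"\n"}{end}'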
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [31d4485dc7982aced65031b666c86beb9beea7f0add39a846111c271e1b90ccf] <==
	{"level":"warn","ts":"2025-12-16T05:06:09.616303Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.154962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-16T05:06:09.616460Z","caller":"traceutil/trace.go:172","msg":"trace[612266778] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:429; }","duration":"131.316604ms","start":"2025-12-16T05:06:09.485128Z","end":"2025-12-16T05:06:09.616445Z","steps":["trace[612266778] 'agreement among raft nodes before linearized reading'  (duration: 97.551586ms)","trace[612266778] 'range keys from in-memory index tree'  (duration: 33.495892ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:09.616501Z","caller":"traceutil/trace.go:172","msg":"trace[900879562] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"140.262437ms","start":"2025-12-16T05:06:09.476213Z","end":"2025-12-16T05:06:09.616476Z","steps":["trace[900879562] 'process raft request'  (duration: 106.489914ms)","trace[900879562] 'compare'  (duration: 33.529302ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:09.616631Z","caller":"traceutil/trace.go:172","msg":"trace[218378059] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"140.078753ms","start":"2025-12-16T05:06:09.476540Z","end":"2025-12-16T05:06:09.616619Z","steps":["trace[218378059] 'process raft request'  (duration: 139.843943ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.616659Z","caller":"traceutil/trace.go:172","msg":"trace[1801022997] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"139.210678ms","start":"2025-12-16T05:06:09.477439Z","end":"2025-12-16T05:06:09.616650Z","steps":["trace[1801022997] 'process raft request'  (duration: 139.035017ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:06:09.616685Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.604516ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T05:06:09.616720Z","caller":"traceutil/trace.go:172","msg":"trace[2006365046] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:0; response_revision:435; }","duration":"112.647075ms","start":"2025-12-16T05:06:09.504064Z","end":"2025-12-16T05:06:09.616711Z","steps":["trace[2006365046] 'agreement among raft nodes before linearized reading'  (duration: 112.574977ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.616810Z","caller":"traceutil/trace.go:172","msg":"trace[1090414356] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"139.102956ms","start":"2025-12-16T05:06:09.477700Z","end":"2025-12-16T05:06:09.616803Z","steps":["trace[1090414356] 'process raft request'  (duration: 138.843105ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.616870Z","caller":"traceutil/trace.go:172","msg":"trace[2075775532] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"139.140012ms","start":"2025-12-16T05:06:09.477703Z","end":"2025-12-16T05:06:09.616843Z","steps":["trace[2075775532] 'process raft request'  (duration: 138.89728ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.616867Z","caller":"traceutil/trace.go:172","msg":"trace[944890944] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"139.918214ms","start":"2025-12-16T05:06:09.476937Z","end":"2025-12-16T05:06:09.616856Z","steps":["trace[944890944] 'process raft request'  (duration: 139.493103ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.844890Z","caller":"traceutil/trace.go:172","msg":"trace[1949662580] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"220.448708ms","start":"2025-12-16T05:06:09.624417Z","end":"2025-12-16T05:06:09.844866Z","steps":["trace[1949662580] 'process raft request'  (duration: 125.284953ms)","trace[1949662580] 'compare'  (duration: 94.925745ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:09.845823Z","caller":"traceutil/trace.go:172","msg":"trace[1415415350] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"217.884008ms","start":"2025-12-16T05:06:09.627926Z","end":"2025-12-16T05:06:09.845810Z","steps":["trace[1415415350] 'process raft request'  (duration: 217.80308ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.845850Z","caller":"traceutil/trace.go:172","msg":"trace[1869093682] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"218.768234ms","start":"2025-12-16T05:06:09.627067Z","end":"2025-12-16T05:06:09.845835Z","steps":["trace[1869093682] 'process raft request'  (duration: 218.632082ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:09.845978Z","caller":"traceutil/trace.go:172","msg":"trace[1585753833] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"218.987993ms","start":"2025-12-16T05:06:09.626982Z","end":"2025-12-16T05:06:09.845970Z","steps":["trace[1585753833] 'process raft request'  (duration: 218.65487ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.027906Z","caller":"traceutil/trace.go:172","msg":"trace[185913012] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"173.567334ms","start":"2025-12-16T05:06:09.854314Z","end":"2025-12-16T05:06:10.027881Z","steps":["trace[185913012] 'process raft request'  (duration: 173.47287ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.027970Z","caller":"traceutil/trace.go:172","msg":"trace[1896521222] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"157.770996ms","start":"2025-12-16T05:06:09.870184Z","end":"2025-12-16T05:06:10.027955Z","steps":["trace[1896521222] 'process raft request'  (duration: 157.741922ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.028143Z","caller":"traceutil/trace.go:172","msg":"trace[818602939] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"174.335241ms","start":"2025-12-16T05:06:09.853793Z","end":"2025-12-16T05:06:10.028128Z","steps":["trace[818602939] 'process raft request'  (duration: 152.003962ms)","trace[818602939] 'compare'  (duration: 21.878351ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:10.028218Z","caller":"traceutil/trace.go:172","msg":"trace[875561013] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"170.895044ms","start":"2025-12-16T05:06:09.857309Z","end":"2025-12-16T05:06:10.028204Z","steps":["trace[875561013] 'process raft request'  (duration: 170.552595ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.028269Z","caller":"traceutil/trace.go:172","msg":"trace[1541164148] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"170.89657ms","start":"2025-12-16T05:06:09.857361Z","end":"2025-12-16T05:06:10.028257Z","steps":["trace[1541164148] 'process raft request'  (duration: 170.530852ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.028236Z","caller":"traceutil/trace.go:172","msg":"trace[179833025] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"172.444289ms","start":"2025-12-16T05:06:09.855771Z","end":"2025-12-16T05:06:10.028215Z","steps":["trace[179833025] 'process raft request'  (duration: 172.048027ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:06:10.274808Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.255836ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597756146472515 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-b84665fb8.188199bde95edade\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-b84665fb8.188199bde95edade\" value_size:676 lease:499225719291696326 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-16T05:06:10.275131Z","caller":"traceutil/trace.go:172","msg":"trace[1828505244] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"243.622892ms","start":"2025-12-16T05:06:10.031486Z","end":"2025-12-16T05:06:10.275109Z","steps":["trace[1828505244] 'process raft request'  (duration: 121.963521ms)","trace[1828505244] 'compare'  (duration: 120.959649ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T05:06:10.275203Z","caller":"traceutil/trace.go:172","msg":"trace[1476405070] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"241.134484ms","start":"2025-12-16T05:06:10.034055Z","end":"2025-12-16T05:06:10.275189Z","steps":["trace[1476405070] 'process raft request'  (duration: 241.066903ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.275237Z","caller":"traceutil/trace.go:172","msg":"trace[1608356931] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"243.072448ms","start":"2025-12-16T05:06:10.032159Z","end":"2025-12-16T05:06:10.275231Z","steps":["trace[1608356931] 'process raft request'  (duration: 242.780955ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.275315Z","caller":"traceutil/trace.go:172","msg":"trace[1014788976] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"241.320568ms","start":"2025-12-16T05:06:10.033980Z","end":"2025-12-16T05:06:10.275301Z","steps":["trace[1014788976] 'process raft request'  (duration: 241.071385ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:06:13 up 48 min,  0 user,  load average: 4.79, 3.21, 2.13
	Linux newest-cni-418580 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c833633bfd1f815e298fd2988dbd61c2348f2a3afefc4699093ac7336d7a6696] <==
	I1216 05:06:07.133488       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:06:07.133781       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 05:06:07.133922       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:06:07.133943       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:06:07.133974       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:06:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:06:07.336234       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:06:07.336292       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:06:07.336329       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:06:07.337004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:06:07.733427       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:06:07.733531       1 metrics.go:72] Registering metrics
	I1216 05:06:07.735228       1 controller.go:711] "Syncing nftables rules"
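
The sync completing here is what clears the NetworkPluginNotReady condition in the node description above: kindnet drops its CNI config into /etc/cni/net.d/ and kubelet picks it up on the next sync loop. To confirm from the host (the conflist file name is kindnet's usual one, assumed rather than shown in this log):

    docker exec newest-cni-418580 cat /etc/cni/net.d/10-kindnet.conflist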
	
	
	==> kube-apiserver [792381e45950963b0ae8db2f5ad977f211daaf0810b7a93dd2d909444e1821c4] <==
	I1216 05:06:05.645773       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 05:06:05.645848       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 05:06:05.660201       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1216 05:06:05.661336       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:06:05.663481       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:06:05.664215       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:05.664240       1 policy_source.go:248] refreshing policies
	I1216 05:06:05.681511       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 05:06:05.681632       1 aggregator.go:187] initial CRD sync complete...
	I1216 05:06:05.681675       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 05:06:05.681704       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:06:05.681735       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:06:05.683149       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:06:05.684554       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:06:06.003040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:06:06.037462       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:06:06.058381       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:06:06.065769       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:06:06.071731       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:06:06.124365       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.65.71"}
	I1216 05:06:06.140507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.148.69"}
	I1216 05:06:06.538844       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1216 05:06:09.142141       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:06:09.269435       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:06:09.477069       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5aa99fc300b28206b76480bb951b5c8dfa804ac51633f6fe3b8e67aa1be01b31] <==
	I1216 05:06:08.804772       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1216 05:06:08.797834       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797823       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797843       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797849       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797861       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797873       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797882       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797912       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797916       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797923       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797928       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797957       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797802       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.806473       1 range_allocator.go:177] "Sending events to api server"
	I1216 05:06:08.806498       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1216 05:06:08.806504       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:06:08.806511       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.797899       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.809610       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.814794       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:06:08.897545       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:08.897590       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1216 05:06:08.897598       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1216 05:06:08.915232       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [861164120a42520743ba9a3cc2b033ed5f96e1a81daa27915de6b8956c3e4d1c] <==
	I1216 05:06:06.881650       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:06:06.949513       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:06:07.050082       1 shared_informer.go:377] "Caches are synced"
	I1216 05:06:07.050117       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 05:06:07.050258       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:06:07.070886       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:06:07.070940       1 server_linux.go:136] "Using iptables Proxier"
	I1216 05:06:07.076390       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:06:07.076833       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1216 05:06:07.076861       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:06:07.081600       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:06:07.081713       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:06:07.081802       1 config.go:200] "Starting service config controller"
	I1216 05:06:07.081872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:06:07.078676       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:06:07.081953       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:06:07.081924       1 config.go:309] "Starting node config controller"
	I1216 05:06:07.082069       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:06:07.082096       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:06:07.181972       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:06:07.181985       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:06:07.182079       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
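
The one error-level line points at its own remedy: set nodePortAddresses so NodePort traffic binds the primary node IP instead of every local IP. In a kubeadm-managed cluster like this one the field lives in the kube-proxy ConfigMap; a sketch of checking it (field name from the kube-proxy configuration docs):

    kubectl --context newest-cni-418580 -n kube-system get configmap kube-proxy \
      -o jsonpath='{.data.config\.conf}' | grep -n nodePortAddresses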
	
	
	==> kube-scheduler [6c35f39d09b5d69eece5daa0f5057bb2198649a6a6ddb2a863f25c31e0a13ce2] <==
	I1216 05:06:04.395346       1 serving.go:386] Generated self-signed cert in-memory
	W1216 05:06:05.587855       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:06:05.587890       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:06:05.587903       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:06:05.587924       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:06:05.628296       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1216 05:06:05.628414       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:06:05.631532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:06:05.631620       1 shared_informer.go:370] "Waiting for caches to sync"
	I1216 05:06:05.634346       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:06:05.634432       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:06:05.732637       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: I1216 05:06:05.686843     663 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: E1216 05:06:05.693916     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-418580\" already exists" pod="kube-system/kube-apiserver-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: I1216 05:06:05.693958     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: E1216 05:06:05.704625     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-418580\" already exists" pod="kube-system/kube-controller-manager-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: I1216 05:06:05.706675     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: E1216 05:06:05.723455     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-418580\" already exists" pod="kube-system/kube-scheduler-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: I1216 05:06:05.723723     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-418580"
	Dec 16 05:06:05 newest-cni-418580 kubelet[663]: E1216 05:06:05.734039     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-418580\" already exists" pod="kube-system/etcd-newest-cni-418580"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.375722     663 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-418580"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.382487     663 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-418580\" already exists" pod="kube-system/kube-controller-manager-newest-cni-418580"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.382589     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-418580" containerName="kube-controller-manager"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.467188     663 apiserver.go:52] "Watching apiserver"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.477057     663 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.480998     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbf8d374-0c38-4c64-ac2c-9a71db8b4223-xtables-lock\") pod \"kube-proxy-ls9s9\" (UID: \"dbf8d374-0c38-4c64-ac2c-9a71db8b4223\") " pod="kube-system/kube-proxy-ls9s9"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.481460     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-cni-cfg\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.481498     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-xtables-lock\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.481542     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbf8d374-0c38-4c64-ac2c-9a71db8b4223-lib-modules\") pod \"kube-proxy-ls9s9\" (UID: \"dbf8d374-0c38-4c64-ac2c-9a71db8b4223\") " pod="kube-system/kube-proxy-ls9s9"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: I1216 05:06:06.481589     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54b00649-4160-4dab-bce1-83080b927bde-lib-modules\") pod \"kindnet-btffs\" (UID: \"54b00649-4160-4dab-bce1-83080b927bde\") " pod="kube-system/kindnet-btffs"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.540463     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-418580" containerName="etcd"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.540770     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-418580" containerName="kube-controller-manager"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.540840     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-418580" containerName="kube-scheduler"
	Dec 16 05:06:06 newest-cni-418580 kubelet[663]: E1216 05:06:06.541077     663 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-418580" containerName="kube-apiserver"
	Dec 16 05:06:07 newest-cni-418580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:06:08 newest-cni-418580 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:06:08 newest-cni-418580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-418580 -n newest-cni-418580
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-418580 -n newest-cni-418580: exit status 2 (322.100344ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-418580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-67tdz storage-provisioner dashboard-metrics-scraper-867fb5f87b-zvx2d kubernetes-dashboard-b84665fb8-fhc6m
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner dashboard-metrics-scraper-867fb5f87b-zvx2d kubernetes-dashboard-b84665fb8-fhc6m
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner dashboard-metrics-scraper-867fb5f87b-zvx2d kubernetes-dashboard-b84665fb8-fhc6m: exit status 1 (60.633403ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-67tdz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-zvx2d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fhc6m" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-418580 describe pod coredns-7d764666f9-67tdz storage-provisioner dashboard-metrics-scraper-867fb5f87b-zvx2d kubernetes-dashboard-b84665fb8-fhc6m: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.32s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-375252 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-375252 --alsologtostderr -v=1: exit status 80 (2.327394354s)

-- stdout --
	* Pausing node default-k8s-diff-port-375252 ... 
	
	

-- /stdout --
** stderr ** 
	I1216 05:06:29.634397  314496 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:06:29.634673  314496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:29.634684  314496 out.go:374] Setting ErrFile to fd 2...
	I1216 05:06:29.634690  314496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:29.634906  314496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:06:29.635200  314496 out.go:368] Setting JSON to false
	I1216 05:06:29.635221  314496 mustload.go:66] Loading cluster: default-k8s-diff-port-375252
	I1216 05:06:29.635665  314496 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:29.636117  314496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-375252 --format={{.State.Status}}
	I1216 05:06:29.655495  314496 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:06:29.655844  314496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:29.713004  314496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-16 05:06:29.703107004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:29.713586  314496 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-375252 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 05:06:29.715332  314496 out.go:179] * Pausing node default-k8s-diff-port-375252 ... 
	I1216 05:06:29.716431  314496 host.go:66] Checking if "default-k8s-diff-port-375252" exists ...
	I1216 05:06:29.716696  314496 ssh_runner.go:195] Run: systemctl --version
	I1216 05:06:29.716732  314496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-375252
	I1216 05:06:29.734975  314496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/default-k8s-diff-port-375252/id_rsa Username:docker}
	I1216 05:06:29.827900  314496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:29.854456  314496 pause.go:52] kubelet running: true
	I1216 05:06:29.854530  314496 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:06:30.025851  314496 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:06:30.025957  314496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:06:30.104381  314496 cri.go:89] found id: "aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1"
	I1216 05:06:30.104405  314496 cri.go:89] found id: "676776061bc68b44dab6f7b1af084b1b49c0ccdbceaf6cf33a84c892af410ff0"
	I1216 05:06:30.104410  314496 cri.go:89] found id: "fa05464129c4cd5d2065744b02dba6d46de76d42ffb639148532972ad4ace1c8"
	I1216 05:06:30.104414  314496 cri.go:89] found id: "d7bcd1550ff4978f686de51b0a89276350f63f70a69051a18651034993bd6ecb"
	I1216 05:06:30.104418  314496 cri.go:89] found id: "e04d861b0da300bb613395b582a587069a355bde3fc0afcfb3f515297db72226"
	I1216 05:06:30.104423  314496 cri.go:89] found id: "3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45"
	I1216 05:06:30.104427  314496 cri.go:89] found id: "fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3"
	I1216 05:06:30.104431  314496 cri.go:89] found id: "bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff"
	I1216 05:06:30.104436  314496 cri.go:89] found id: "5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01"
	I1216 05:06:30.104444  314496 cri.go:89] found id: "239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	I1216 05:06:30.104449  314496 cri.go:89] found id: "227abf9a6095bd5d2d1170608d9021974081bc4722607f7541d1e12fb67fe96c"
	I1216 05:06:30.104466  314496 cri.go:89] found id: ""
	I1216 05:06:30.104514  314496 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:06:30.118151  314496 retry.go:31] will retry after 247.047616ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:30Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:06:30.365425  314496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:30.380407  314496 pause.go:52] kubelet running: false
	I1216 05:06:30.380491  314496 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:06:30.530360  314496 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:06:30.530446  314496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:06:30.605576  314496 cri.go:89] found id: "aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1"
	I1216 05:06:30.605597  314496 cri.go:89] found id: "676776061bc68b44dab6f7b1af084b1b49c0ccdbceaf6cf33a84c892af410ff0"
	I1216 05:06:30.605601  314496 cri.go:89] found id: "fa05464129c4cd5d2065744b02dba6d46de76d42ffb639148532972ad4ace1c8"
	I1216 05:06:30.605605  314496 cri.go:89] found id: "d7bcd1550ff4978f686de51b0a89276350f63f70a69051a18651034993bd6ecb"
	I1216 05:06:30.605607  314496 cri.go:89] found id: "e04d861b0da300bb613395b582a587069a355bde3fc0afcfb3f515297db72226"
	I1216 05:06:30.605611  314496 cri.go:89] found id: "3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45"
	I1216 05:06:30.605614  314496 cri.go:89] found id: "fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3"
	I1216 05:06:30.605616  314496 cri.go:89] found id: "bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff"
	I1216 05:06:30.605618  314496 cri.go:89] found id: "5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01"
	I1216 05:06:30.605624  314496 cri.go:89] found id: "239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	I1216 05:06:30.605628  314496 cri.go:89] found id: "227abf9a6095bd5d2d1170608d9021974081bc4722607f7541d1e12fb67fe96c"
	I1216 05:06:30.605630  314496 cri.go:89] found id: ""
	I1216 05:06:30.605666  314496 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:06:30.617119  314496 retry.go:31] will retry after 254.225379ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:30Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:06:30.871548  314496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:30.885067  314496 pause.go:52] kubelet running: false
	I1216 05:06:30.885122  314496 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:06:31.032592  314496 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:06:31.032673  314496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:06:31.104317  314496 cri.go:89] found id: "aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1"
	I1216 05:06:31.104341  314496 cri.go:89] found id: "676776061bc68b44dab6f7b1af084b1b49c0ccdbceaf6cf33a84c892af410ff0"
	I1216 05:06:31.104346  314496 cri.go:89] found id: "fa05464129c4cd5d2065744b02dba6d46de76d42ffb639148532972ad4ace1c8"
	I1216 05:06:31.104352  314496 cri.go:89] found id: "d7bcd1550ff4978f686de51b0a89276350f63f70a69051a18651034993bd6ecb"
	I1216 05:06:31.104357  314496 cri.go:89] found id: "e04d861b0da300bb613395b582a587069a355bde3fc0afcfb3f515297db72226"
	I1216 05:06:31.104363  314496 cri.go:89] found id: "3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45"
	I1216 05:06:31.104367  314496 cri.go:89] found id: "fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3"
	I1216 05:06:31.104376  314496 cri.go:89] found id: "bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff"
	I1216 05:06:31.104380  314496 cri.go:89] found id: "5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01"
	I1216 05:06:31.104390  314496 cri.go:89] found id: "239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	I1216 05:06:31.104396  314496 cri.go:89] found id: "227abf9a6095bd5d2d1170608d9021974081bc4722607f7541d1e12fb67fe96c"
	I1216 05:06:31.104399  314496 cri.go:89] found id: ""
	I1216 05:06:31.104435  314496 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:06:31.116488  314496 retry.go:31] will retry after 527.604282ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:31Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:06:31.645240  314496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:31.659676  314496 pause.go:52] kubelet running: false
	I1216 05:06:31.659724  314496 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:06:31.807287  314496 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:06:31.807381  314496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:06:31.875638  314496 cri.go:89] found id: "aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1"
	I1216 05:06:31.875662  314496 cri.go:89] found id: "676776061bc68b44dab6f7b1af084b1b49c0ccdbceaf6cf33a84c892af410ff0"
	I1216 05:06:31.875684  314496 cri.go:89] found id: "fa05464129c4cd5d2065744b02dba6d46de76d42ffb639148532972ad4ace1c8"
	I1216 05:06:31.875690  314496 cri.go:89] found id: "d7bcd1550ff4978f686de51b0a89276350f63f70a69051a18651034993bd6ecb"
	I1216 05:06:31.875694  314496 cri.go:89] found id: "e04d861b0da300bb613395b582a587069a355bde3fc0afcfb3f515297db72226"
	I1216 05:06:31.875699  314496 cri.go:89] found id: "3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45"
	I1216 05:06:31.875703  314496 cri.go:89] found id: "fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3"
	I1216 05:06:31.875707  314496 cri.go:89] found id: "bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff"
	I1216 05:06:31.875711  314496 cri.go:89] found id: "5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01"
	I1216 05:06:31.875722  314496 cri.go:89] found id: "239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	I1216 05:06:31.875728  314496 cri.go:89] found id: "227abf9a6095bd5d2d1170608d9021974081bc4722607f7541d1e12fb67fe96c"
	I1216 05:06:31.875741  314496 cri.go:89] found id: ""
	I1216 05:06:31.875789  314496 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:06:31.889572  314496 out.go:203] 
	W1216 05:06:31.890908  314496 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:06:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 05:06:31.890932  314496 out.go:285] * 
	W1216 05:06:31.895603  314496 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:06:31.896739  314496 out.go:203] 

                                                
                                                
** /stderr **
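
The failing sequence above is mechanical: minikube checks whether kubelet is active, disables it, lists containers in the target namespaces via crictl pod-namespace labels, then runs `sudo runc list -f json`, which exits 1 because /run/runc is absent on the node. A minimal manual reproduction over minikube ssh, using the commands from the log (the final `--root` probe is an assumption about where the CRI runtime might keep its state, not something this log shows):

	# minikube ssh -p default-k8s-diff-port-375252
	sudo systemctl is-active --quiet service kubelet; echo $?                    # 0 while kubelet runs
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system    # same container IDs as found above
	sudo runc list -f json      # exit 1: "open /run/runc: no such file or directory"
	ls -d /run/runc             # confirms the runc state directory is missing
	# assumption: if cri-o drives containers through a different OCI runtime root
	# (for example crun's /run/crun), point the list at that root instead:
	sudo runc --root /run/crun list -f json || true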
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-375252 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-375252
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-375252:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4",
	        "Created": "2025-12-16T05:04:21.317560531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293688,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:05:28.868892712Z",
	            "FinishedAt": "2025-12-16T05:05:26.017939664Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/hosts",
	        "LogPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4-json.log",
	        "Name": "/default-k8s-diff-port-375252",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-375252:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-375252",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4",
	                "LowerDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-375252",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-375252/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-375252",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-375252",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-375252",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c0434c79d016bebb8278e0c7422e8854fd5240240d0ce2fafb4d3f5551fbfa11",
	            "SandboxKey": "/var/run/docker/netns/c0434c79d016",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-375252": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b4cd001204df0b61e89eebe6a12657aa97766dbbe6ee158bf1c23f85315b1b70",
	                    "EndpointID": "96351fa7711459eae73f4b9ac5e4eb9bea44b2322848dccc5aea78ebc2e4c080",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:26:0c:8d:5f:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-375252",
	                        "f7855ebd974c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
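The "22/tcp" mapping in this inspect output (HostPort 33103) is the value the pause attempt resolved at sshutil.go:53 above; the Go template from the cli_runner line earlier in the log extracts it directly:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-375252
	# prints 33103 for this run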
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252: exit status 2 (335.959614ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
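For reference, the host check the harness just ran can be repeated by hand; the state is printed on stdout while the exit status carries the error, which is why the helper marks exit status 2 as "(may be ok)":

	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
	# stdout "Running", exit status 2 in this run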
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-375252 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-375252 logs -n 25: (1.234190204s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:06 UTC │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p newest-cni-418580 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-418580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:06 UTC │
	│ image   │ embed-certs-127599 image list --format=json                                                                                                                                                                                                          │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p embed-certs-127599 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p auto-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-548714                  │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p embed-certs-127599                                                                                                                                                                                                                                │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ delete  │ -p embed-certs-127599                                                                                                                                                                                                                                │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ start   │ -p kindnet-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-548714               │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	│ image   │ newest-cni-418580 image list --format=json                                                                                                                                                                                                           │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ pause   │ -p newest-cni-418580 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	│ delete  │ -p newest-cni-418580                                                                                                                                                                                                                                 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ delete  │ -p newest-cni-418580                                                                                                                                                                                                                                 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ start   │ -p calico-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                               │ calico-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	│ image   │ default-k8s-diff-port-375252 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ pause   │ -p default-k8s-diff-port-375252 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:06:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:06:16.734529  311037 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:06:16.734644  311037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:16.734654  311037 out.go:374] Setting ErrFile to fd 2...
	I1216 05:06:16.734661  311037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:16.734977  311037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:06:16.735504  311037 out.go:368] Setting JSON to false
	I1216 05:06:16.736755  311037 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2928,"bootTime":1765858649,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:06:16.736807  311037 start.go:143] virtualization: kvm guest
	I1216 05:06:16.738874  311037 out.go:179] * [calico-548714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:06:16.740144  311037 notify.go:221] Checking for updates...
	I1216 05:06:16.740281  311037 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:06:16.741579  311037 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:06:16.742859  311037 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:06:16.744169  311037 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:06:16.748495  311037 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:06:16.749660  311037 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:06:16.751399  311037 config.go:182] Loaded profile config "auto-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:16.751528  311037 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:16.751616  311037 config.go:182] Loaded profile config "kindnet-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:16.751695  311037 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:06:16.774944  311037 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:06:16.775059  311037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:16.835259  311037 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:06:16.823973244 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:16.835379  311037 docker.go:319] overlay module found
	I1216 05:06:16.837933  311037 out.go:179] * Using the docker driver based on user configuration
	I1216 05:06:16.839115  311037 start.go:309] selected driver: docker
	I1216 05:06:16.839129  311037 start.go:927] validating driver "docker" against <nil>
	I1216 05:06:16.839139  311037 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:06:16.839652  311037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:16.909172  311037 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:06:16.892445715 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:16.909326  311037 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:06:16.909634  311037 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:06:16.911337  311037 out.go:179] * Using Docker driver with root privileges
	I1216 05:06:16.912602  311037 cni.go:84] Creating CNI manager for "calico"
	I1216 05:06:16.912619  311037 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1216 05:06:16.912684  311037 start.go:353] cluster config:
	{Name:calico-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:16.914286  311037 out.go:179] * Starting "calico-548714" primary control-plane node in "calico-548714" cluster
	I1216 05:06:16.915336  311037 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:06:16.916400  311037 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:06:16.917468  311037 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:16.917505  311037 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:06:16.917530  311037 cache.go:65] Caching tarball of preloaded images
	I1216 05:06:16.917591  311037 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:06:16.917631  311037 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:06:16.917649  311037 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:06:16.917764  311037 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/config.json ...
	I1216 05:06:16.917790  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/config.json: {Name:mkf46a9345d8db3aaffd3f2e9c58dd5028fcc8df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:16.938878  311037 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:06:16.938894  311037 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:06:16.938913  311037 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:06:16.938952  311037 start.go:360] acquireMachinesLock for calico-548714: {Name:mk0fe94f066db6459e497304b80f7300cf7e5cc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:06:16.939059  311037 start.go:364] duration metric: took 88.339µs to acquireMachinesLock for "calico-548714"
	I1216 05:06:16.939096  311037 start.go:93] Provisioning new machine with config: &{Name:calico-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:06:16.939178  311037 start.go:125] createHost starting for "" (driver="docker")
	I1216 05:06:15.497041  306433 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:06:15.497170  306433 machine.go:97] duration metric: took 4.200677815s to provisionDockerMachine
	I1216 05:06:15.497216  306433 client.go:176] duration metric: took 9.757109246s to LocalClient.Create
	I1216 05:06:15.497265  306433 start.go:167] duration metric: took 9.757207317s to libmachine.API.Create "kindnet-548714"
	I1216 05:06:15.497321  306433 start.go:293] postStartSetup for "kindnet-548714" (driver="docker")
	I1216 05:06:15.497364  306433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:06:15.497466  306433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:06:15.497530  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:15.518944  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:15.621008  306433 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:06:15.624866  306433 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:06:15.624899  306433 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:06:15.624912  306433 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:06:15.624966  306433 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:06:15.625082  306433 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:06:15.625246  306433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:06:15.635460  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:15.658133  306433 start.go:296] duration metric: took 160.754363ms for postStartSetup
	I1216 05:06:15.658562  306433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-548714
	I1216 05:06:15.679635  306433 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/config.json ...
	I1216 05:06:15.679955  306433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:06:15.680008  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:15.704489  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:15.809970  306433 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:06:15.816138  306433 start.go:128] duration metric: took 10.078151931s to createHost
	I1216 05:06:15.816163  306433 start.go:83] releasing machines lock for "kindnet-548714", held for 10.078288306s
	I1216 05:06:15.816224  306433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-548714
	I1216 05:06:15.837056  306433 ssh_runner.go:195] Run: cat /version.json
	I1216 05:06:15.837102  306433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:06:15.837118  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:15.837174  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:15.864111  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:15.866757  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:16.040009  306433 ssh_runner.go:195] Run: systemctl --version
	I1216 05:06:16.046714  306433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:06:16.106078  306433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:06:16.113847  306433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:06:16.114065  306433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:06:16.157848  306433 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:06:16.157874  306433 start.go:496] detecting cgroup driver to use...
	I1216 05:06:16.157910  306433 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:06:16.157959  306433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:06:16.177412  306433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:06:16.193694  306433 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:06:16.193763  306433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:06:16.229481  306433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:06:16.268280  306433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:06:16.368564  306433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:06:16.492701  306433 docker.go:234] disabling docker service ...
	I1216 05:06:16.492819  306433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:06:16.520731  306433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:06:16.537336  306433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:06:16.639769  306433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:06:16.736158  306433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:06:16.749853  306433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:06:16.764897  306433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:06:16.764960  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.776640  306433 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:06:16.776700  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.785807  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.797915  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.808615  306433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:06:16.817330  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.826109  306433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.841445  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.851047  306433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:06:16.859445  306433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:06:16.867716  306433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:16.953613  306433 ssh_runner.go:195] Run: sudo systemctl restart crio
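
Everything from the crictl.yaml write through this restart is plain line-level editing of CRI-O's drop-in config: pin the pause image, force cgroup_manager = "systemd" with conmon in the pod cgroup, open unprivileged low ports via default_sysctls, then restart crio to apply. A condensed Go sketch of two of those rewrites, assuming the same file layout (minikube itself shells out to sed exactly as logged):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	cfg := string(b)

	// pin the pause image used for pod sandboxes
	cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// drop any stale conmon_cgroup line, then set the cgroup manager and
	// re-add conmon_cgroup = "pod" directly after it
	cfg = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(cfg, "")
	cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(cfg, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
		panic(err)
	}
}
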
	I1216 05:06:17.104980  306433 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:06:17.105058  306433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:06:17.109011  306433 start.go:564] Will wait 60s for crictl version
	I1216 05:06:17.109131  306433 ssh_runner.go:195] Run: which crictl
	I1216 05:06:17.113169  306433 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:06:17.146413  306433 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:06:17.146493  306433 ssh_runner.go:195] Run: crio --version
	I1216 05:06:17.177083  306433 ssh_runner.go:195] Run: crio --version
	I1216 05:06:17.216574  306433 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1216 05:06:14.821776  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	I1216 05:06:16.321706  293223 pod_ready.go:94] pod "coredns-66bc5c9577-65x5z" is "Ready"
	I1216 05:06:16.321734  293223 pod_ready.go:86] duration metric: took 36.506354154s for pod "coredns-66bc5c9577-65x5z" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.324422  293223 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.328510  293223 pod_ready.go:94] pod "etcd-default-k8s-diff-port-375252" is "Ready"
	I1216 05:06:16.328533  293223 pod_ready.go:86] duration metric: took 4.088428ms for pod "etcd-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.330541  293223 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.334327  293223 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-375252" is "Ready"
	I1216 05:06:16.334347  293223 pod_ready.go:86] duration metric: took 3.786455ms for pod "kube-apiserver-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.336390  293223 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.519906  293223 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-375252" is "Ready"
	I1216 05:06:16.519941  293223 pod_ready.go:86] duration metric: took 183.531316ms for pod "kube-controller-manager-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.719181  293223 pod_ready.go:83] waiting for pod "kube-proxy-dmw9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:17.119695  293223 pod_ready.go:94] pod "kube-proxy-dmw9t" is "Ready"
	I1216 05:06:17.119726  293223 pod_ready.go:86] duration metric: took 400.513863ms for pod "kube-proxy-dmw9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:17.319881  293223 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:17.719687  293223 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-375252" is "Ready"
	I1216 05:06:17.719712  293223 pod_ready.go:86] duration metric: took 399.802458ms for pod "kube-scheduler-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:17.719727  293223 pod_ready.go:40] duration metric: took 37.907937353s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:06:17.783977  293223 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:06:17.787906  293223 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-375252" cluster and "default" namespace by default
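
The pod_ready loop above is a label-driven poll: for each control-plane label it lists kube-system pods and waits for the PodReady condition, recording a per-pod duration metric (the 36.5s coredns wait dominates the 37.9s total). A minimal client-go sketch of the same readiness check; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Printf("ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
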
	I1216 05:06:15.182180  302822 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.687219927s
	I1216 05:06:16.667772  302822 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.172825206s
	I1216 05:06:17.996485  302822 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502174664s
	I1216 05:06:18.016000  302822 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 05:06:18.028192  302822 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 05:06:18.038577  302822 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 05:06:18.038906  302822 kubeadm.go:319] [mark-control-plane] Marking the node auto-548714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 05:06:18.047842  302822 kubeadm.go:319] [bootstrap-token] Using token: c919d0.lrukntq29vxntbkj
	I1216 05:06:18.049600  302822 out.go:252]   - Configuring RBAC rules ...
	I1216 05:06:18.049740  302822 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 05:06:18.052707  302822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 05:06:18.058416  302822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 05:06:18.060948  302822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 05:06:18.064603  302822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 05:06:18.067795  302822 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 05:06:18.404066  302822 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 05:06:18.821113  302822 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 05:06:19.404085  302822 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 05:06:19.405296  302822 kubeadm.go:319] 
	I1216 05:06:19.405370  302822 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 05:06:19.405383  302822 kubeadm.go:319] 
	I1216 05:06:19.405471  302822 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 05:06:19.405482  302822 kubeadm.go:319] 
	I1216 05:06:19.405507  302822 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 05:06:19.405639  302822 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 05:06:19.405726  302822 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 05:06:19.405740  302822 kubeadm.go:319] 
	I1216 05:06:19.405786  302822 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 05:06:19.405793  302822 kubeadm.go:319] 
	I1216 05:06:19.405833  302822 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 05:06:19.405840  302822 kubeadm.go:319] 
	I1216 05:06:19.405905  302822 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 05:06:19.405973  302822 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 05:06:19.406064  302822 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 05:06:19.406075  302822 kubeadm.go:319] 
	I1216 05:06:19.406199  302822 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 05:06:19.406324  302822 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 05:06:19.406336  302822 kubeadm.go:319] 
	I1216 05:06:19.406458  302822 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c919d0.lrukntq29vxntbkj \
	I1216 05:06:19.406682  302822 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 05:06:19.406722  302822 kubeadm.go:319] 	--control-plane 
	I1216 05:06:19.406746  302822 kubeadm.go:319] 
	I1216 05:06:19.406862  302822 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 05:06:19.406873  302822 kubeadm.go:319] 
	I1216 05:06:19.406984  302822 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c919d0.lrukntq29vxntbkj \
	I1216 05:06:19.407158  302822 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
	I1216 05:06:19.410660  302822 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:06:19.410825  302822 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:06:19.410877  302822 cni.go:84] Creating CNI manager for ""
	I1216 05:06:19.410900  302822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:06:19.417866  302822 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 05:06:17.217863  306433 cli_runner.go:164] Run: docker network inspect kindnet-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:17.242809  306433 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1216 05:06:17.247892  306433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:06:17.260512  306433 kubeadm.go:884] updating cluster {Name:kindnet-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:06:17.260654  306433 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:17.260715  306433 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:17.301982  306433 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:17.302006  306433 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:06:17.302072  306433 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:17.334803  306433 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:17.334823  306433 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:06:17.334831  306433 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1216 05:06:17.334914  306433 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-548714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 05:06:17.335066  306433 ssh_runner.go:195] Run: crio config
	I1216 05:06:17.401567  306433 cni.go:84] Creating CNI manager for "kindnet"
	I1216 05:06:17.401604  306433 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:06:17.401635  306433 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-548714 NodeName:kindnet-548714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:06:17.401828  306433 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-548714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:06:17.401909  306433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:06:17.413779  306433 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:06:17.413870  306433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:06:17.423610  306433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1216 05:06:17.438889  306433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:06:17.456025  306433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
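
The kubeadm.yaml staged above (2210 bytes) is the three-document config printed earlier, rendered from minikube's cluster config. The rendering code itself isn't shown in the log; a text/template sketch conveys the idea, with a hypothetical struct whose field names are invented for illustration:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a hypothetical stand-in for minikube's real config struct.
type kubeadmParams struct {
	NodeIP      string
	ClusterName string
	K8sVersion  string
	PodSubnet   string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// values copied from the generated config shown above
	if err := t.Execute(os.Stdout, kubeadmParams{
		NodeIP:      "192.168.94.2",
		ClusterName: "mk",
		K8sVersion:  "v1.34.2",
		PodSubnet:   "10.244.0.0/16",
	}); err != nil {
		panic(err)
	}
}
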
	I1216 05:06:17.497009  306433 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:06:17.501749  306433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
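
This is the same idempotent /etc/hosts upsert used earlier for host.minikube.internal: filter out any existing line ending in the tab-separated name, append the fresh mapping, and copy the temp file back over /etc/hosts. A Go sketch of the pattern (the log shells out to bash as shown; this is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites /etc/hosts so exactly one line maps name to ip,
// mirroring the grep -v / echo / sudo cp pipeline in the log.
func upsertHost(ip, name string) error {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, if any
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("192.168.94.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
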
	I1216 05:06:17.519229  306433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:17.628202  306433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:17.653778  306433 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714 for IP: 192.168.94.2
	I1216 05:06:17.653801  306433 certs.go:195] generating shared ca certs ...
	I1216 05:06:17.653821  306433 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.654153  306433 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:06:17.654247  306433 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:06:17.654268  306433 certs.go:257] generating profile certs ...
	I1216 05:06:17.654346  306433 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.key
	I1216 05:06:17.654462  306433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.crt with IP's: []
	I1216 05:06:17.764008  306433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.crt ...
	I1216 05:06:17.764084  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.crt: {Name:mke6944cd5e230ef5b9c0df0e1fbf4ee9f9f453c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.764282  306433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.key ...
	I1216 05:06:17.764294  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.key: {Name:mk8d4989a112cae82c25474cfa910e701c396ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.764403  306433 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key.dea5b897
	I1216 05:06:17.764473  306433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt.dea5b897 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1216 05:06:17.938097  306433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt.dea5b897 ...
	I1216 05:06:17.938134  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt.dea5b897: {Name:mk5ff47cdfc0871e37bce8ce81b7bea99e8e8f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.938349  306433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key.dea5b897 ...
	I1216 05:06:17.938375  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key.dea5b897: {Name:mk8db4b77e31ec1c99c7a14549292caf6d8852d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.938525  306433 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt.dea5b897 -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt
	I1216 05:06:17.938659  306433 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key.dea5b897 -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key
	I1216 05:06:17.938758  306433 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.key
	I1216 05:06:17.938779  306433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.crt with IP's: []
	I1216 05:06:18.065161  306433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.crt ...
	I1216 05:06:18.065192  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.crt: {Name:mk7f853cc48a13763c5bd43f9210b1937965d312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:18.065392  306433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.key ...
	I1216 05:06:18.065413  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.key: {Name:mk34f0bd0856b976828b25c35600a353c132b27d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
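
The certs.go/crypto.go sequence above issues the per-profile certificates from the shared minikubeCA: a client cert, an apiserver serving cert whose IP SANs cover the in-cluster service VIP (10.96.0.1), loopback, and the node IP (192.168.94.2), and an aggregator proxy-client cert. A compact crypto/x509 sketch of issuing an IP-SAN serving cert; it is self-contained, so it also creates a throwaway CA, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key/cert (in minikube these already exist under .minikube/)
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// leaf serving cert with the IP SANs from the log above
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
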
	I1216 05:06:18.065682  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:06:18.065732  306433 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:06:18.065746  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:06:18.065794  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:06:18.065827  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:06:18.065861  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:06:18.065924  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:18.066740  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:06:18.088319  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:06:18.114943  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:06:18.138725  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:06:18.158609  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:06:18.178437  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:06:18.199468  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:06:18.218734  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:06:18.240736  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:06:18.262830  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:06:18.286555  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:06:18.308449  306433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:06:18.322890  306433 ssh_runner.go:195] Run: openssl version
	I1216 05:06:18.328941  306433 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:18.336339  306433 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:06:18.343938  306433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:18.347550  306433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:18.347615  306433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:18.383935  306433 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:06:18.392410  306433 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:06:18.400706  306433 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:06:18.410124  306433 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:06:18.419200  306433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:06:18.423776  306433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:06:18.423846  306433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:06:18.476667  306433 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:06:18.486508  306433 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:06:18.494575  306433 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:06:18.503452  306433 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:06:18.513209  306433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:06:18.517897  306433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:06:18.517962  306433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:06:18.563791  306433 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:18.574433  306433 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
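
The openssl/ln pattern repeated for all three certs above is how OpenSSL-style trust stores are populated: openssl x509 -hash -noout prints the subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0 here) lets TLS clients locate the CA by hash lookup. A Go sketch of that install step, shelling out to openssl just like the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert symlinks /etc/ssl/certs/<subject-hash>.0 to the given PEM,
// matching the openssl/ln -fs sequence above.
func installCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
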
	I1216 05:06:18.583769  306433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:06:18.587910  306433 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:06:18.587995  306433 kubeadm.go:401] StartCluster: {Name:kindnet-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:18.588108  306433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:06:18.588183  306433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:06:18.620557  306433 cri.go:89] found id: ""
	I1216 05:06:18.620636  306433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:06:18.630816  306433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:18.640929  306433 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:18.640996  306433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:18.654637  306433 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:18.654664  306433 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:18.654717  306433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:06:18.667284  306433 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:18.667613  306433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:18.677865  306433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:06:18.688365  306433 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:18.688428  306433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:18.696184  306433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:06:18.704206  306433 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:18.704273  306433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:18.711841  306433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:06:18.719852  306433 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:18.719916  306433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:06:18.728595  306433 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:18.798195  306433 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:06:18.866578  306433 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:06:16.941590  311037 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:06:16.941833  311037 start.go:159] libmachine.API.Create for "calico-548714" (driver="docker")
	I1216 05:06:16.941872  311037 client.go:173] LocalClient.Create starting
	I1216 05:06:16.941956  311037 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:06:16.941994  311037 main.go:143] libmachine: Decoding PEM data...
	I1216 05:06:16.942039  311037 main.go:143] libmachine: Parsing certificate...
	I1216 05:06:16.942119  311037 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:06:16.942147  311037 main.go:143] libmachine: Decoding PEM data...
	I1216 05:06:16.942184  311037 main.go:143] libmachine: Parsing certificate...
	I1216 05:06:16.942614  311037 cli_runner.go:164] Run: docker network inspect calico-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:06:16.959740  311037 cli_runner.go:211] docker network inspect calico-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:06:16.959838  311037 network_create.go:284] running [docker network inspect calico-548714] to gather additional debugging logs...
	I1216 05:06:16.959870  311037 cli_runner.go:164] Run: docker network inspect calico-548714
	W1216 05:06:16.978280  311037 cli_runner.go:211] docker network inspect calico-548714 returned with exit code 1
	I1216 05:06:16.978307  311037 network_create.go:287] error running [docker network inspect calico-548714]: docker network inspect calico-548714: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-548714 not found
	I1216 05:06:16.978323  311037 network_create.go:289] output of [docker network inspect calico-548714]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-548714 not found
	
	** /stderr **
	I1216 05:06:16.978435  311037 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:16.995573  311037 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:06:16.996483  311037 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:06:16.997211  311037 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:06:16.997727  311037 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b4cd001204df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:4c:f5:dc:0a:90} reservation:<nil>}
	I1216 05:06:16.998540  311037 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f80fe0}
	I1216 05:06:16.998564  311037 network_create.go:124] attempt to create docker network calico-548714 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 05:06:16.998621  311037 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-548714 calico-548714
	I1216 05:06:17.050112  311037 network_create.go:108] docker network calico-548714 192.168.85.0/24 created
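
The subnet scan above walks candidate private /24s in steps of 9 (192.168.49.0, .58, .67, .76, ...) and takes the first one no existing bridge occupies, here 192.168.85.0/24. A stdlib sketch of such a probe against the host's interface addresses (illustrative; minikube's real check also consults docker's network list, as the skipped-subnet lines show):

package main

import (
	"fmt"
	"net"
)

// taken reports whether any host interface address already sits inside cidr.
func taken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipnet.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// same candidate sequence as the log: 192.168.49.0/24, then +9 per step
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken(cidr) {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}
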
	I1216 05:06:17.050149  311037 kic.go:121] calculated static IP "192.168.85.2" for the "calico-548714" container
	I1216 05:06:17.050245  311037 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:06:17.068873  311037 cli_runner.go:164] Run: docker volume create calico-548714 --label name.minikube.sigs.k8s.io=calico-548714 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:06:17.086743  311037 oci.go:103] Successfully created a docker volume calico-548714
	I1216 05:06:17.086832  311037 cli_runner.go:164] Run: docker run --rm --name calico-548714-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-548714 --entrypoint /usr/bin/test -v calico-548714:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:06:17.543293  311037 oci.go:107] Successfully prepared a docker volume calico-548714
	I1216 05:06:17.543367  311037 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:17.543383  311037 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:06:17.543471  311037 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-548714:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
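
This is the kic preload shortcut: a throwaway container mounts the lz4 preload tarball read-only next to the machine's named volume and untars /var into it, so the node container starts with all images already in place (the completion line at 05:06:21 below reports 4.23s). A Go sketch of that invocation (paths copied from the log; the image digest is trimmed for brevity):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4"
	volume := "calico-548714"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141" // digest trimmed

	// throwaway container: mount the tarball read-only, extract into the volume
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}
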
	I1216 05:06:19.419289  302822 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 05:06:19.424959  302822 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 05:06:19.424983  302822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 05:06:19.438654  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 05:06:19.683312  302822 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 05:06:19.683465  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:19.683527  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-548714 minikube.k8s.io/updated_at=2025_12_16T05_06_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=auto-548714 minikube.k8s.io/primary=true
	I1216 05:06:19.697611  302822 ops.go:34] apiserver oom_adj: -16
	I1216 05:06:19.780117  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:20.280173  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:20.780155  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:21.280961  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:21.780143  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:22.280141  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:22.780159  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:23.280162  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:23.780167  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:23.865905  302822 kubeadm.go:1114] duration metric: took 4.182483113s to wait for elevateKubeSystemPrivileges
	I1216 05:06:23.865948  302822 kubeadm.go:403] duration metric: took 15.931077528s to StartCluster
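
The nine kubectl get sa default runs above are a fixed 500ms retry loop: kubeadm has returned, but the cluster is only usable once the controller manager has created the default service account, so minikube polls until it appears (4.18s here) before declaring StartCluster done. A stdlib sketch of that wait; the 2-minute timeout is a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	deadline := start.Add(2 * time.Minute) // placeholder timeout
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Printf("default service account ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
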
	I1216 05:06:23.865968  302822 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:23.866060  302822 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:06:23.867370  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:23.867632  302822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 05:06:23.867648  302822 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:06:23.867629  302822 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:06:23.867722  302822 addons.go:70] Setting storage-provisioner=true in profile "auto-548714"
	I1216 05:06:23.867736  302822 addons.go:239] Setting addon storage-provisioner=true in "auto-548714"
	I1216 05:06:23.867755  302822 addons.go:70] Setting default-storageclass=true in profile "auto-548714"
	I1216 05:06:23.867871  302822 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-548714"
	I1216 05:06:23.867951  302822 config.go:182] Loaded profile config "auto-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:23.867759  302822 host.go:66] Checking if "auto-548714" exists ...
	I1216 05:06:23.868262  302822 cli_runner.go:164] Run: docker container inspect auto-548714 --format={{.State.Status}}
	I1216 05:06:23.868582  302822 cli_runner.go:164] Run: docker container inspect auto-548714 --format={{.State.Status}}
	I1216 05:06:23.869152  302822 out.go:179] * Verifying Kubernetes components...
	I1216 05:06:23.873173  302822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:23.896297  302822 addons.go:239] Setting addon default-storageclass=true in "auto-548714"
	I1216 05:06:23.896341  302822 host.go:66] Checking if "auto-548714" exists ...
	I1216 05:06:23.896800  302822 cli_runner.go:164] Run: docker container inspect auto-548714 --format={{.State.Status}}
	I1216 05:06:23.897715  302822 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:06:23.899182  302822 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:06:23.899218  302822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:06:23.899277  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:23.921406  302822 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:06:23.921437  302822 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:06:23.921522  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:23.933076  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:23.958959  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:23.995816  302822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 05:06:24.045391  302822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:24.072281  302822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:06:24.090124  302822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:06:24.214132  302822 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1216 05:06:24.215075  302822 node_ready.go:35] waiting up to 15m0s for node "auto-548714" to be "Ready" ...
	I1216 05:06:24.510688  302822 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 05:06:21.778132  311037 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-548714:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.234593118s)
	I1216 05:06:21.778171  311037 kic.go:203] duration metric: took 4.234784476s to extract preloaded images to volume ...
	W1216 05:06:21.778283  311037 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 05:06:21.778322  311037 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 05:06:21.778368  311037 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 05:06:21.840611  311037 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-548714 --name calico-548714 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-548714 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-548714 --network calico-548714 --ip 192.168.85.2 --volume calico-548714:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
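	(Note: the --publish=127.0.0.1::22 flag above lets Docker pick an ephemeral host port for SSH, which is why the following steps keep re-running `docker container inspect` with the NetworkSettings.Ports template to rediscover it. An equivalent one-off check, shown purely as an illustration, is:)
	
	    docker port calico-548714 22/tcp
	    # -> 127.0.0.1:33125, the address the SSH client below actually dials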
	I1216 05:06:22.127725  311037 cli_runner.go:164] Run: docker container inspect calico-548714 --format={{.State.Running}}
	I1216 05:06:22.147710  311037 cli_runner.go:164] Run: docker container inspect calico-548714 --format={{.State.Status}}
	I1216 05:06:22.166077  311037 cli_runner.go:164] Run: docker exec calico-548714 stat /var/lib/dpkg/alternatives/iptables
	I1216 05:06:22.217549  311037 oci.go:144] the created container "calico-548714" has a running status.
	I1216 05:06:22.217581  311037 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa...
	I1216 05:06:22.288828  311037 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 05:06:22.322868  311037 cli_runner.go:164] Run: docker container inspect calico-548714 --format={{.State.Status}}
	I1216 05:06:22.346494  311037 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 05:06:22.346517  311037 kic_runner.go:114] Args: [docker exec --privileged calico-548714 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 05:06:22.400805  311037 cli_runner.go:164] Run: docker container inspect calico-548714 --format={{.State.Status}}
	I1216 05:06:22.418403  311037 machine.go:94] provisionDockerMachine start ...
	I1216 05:06:22.418504  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:22.440724  311037 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:22.441599  311037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1216 05:06:22.441617  311037 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:06:22.442503  311037 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51062->127.0.0.1:33125: read: connection reset by peer
	I1216 05:06:25.575651  311037 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-548714
	
	I1216 05:06:25.575690  311037 ubuntu.go:182] provisioning hostname "calico-548714"
	I1216 05:06:25.575774  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:25.597345  311037 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:25.597658  311037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1216 05:06:25.597680  311037 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-548714 && echo "calico-548714" | sudo tee /etc/hostname
	I1216 05:06:25.741526  311037 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-548714
	
	I1216 05:06:25.741614  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:25.769474  311037 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:25.769788  311037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1216 05:06:25.769812  311037 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-548714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-548714/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-548714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:06:25.914142  311037 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:06:25.914170  311037 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:06:25.914191  311037 ubuntu.go:190] setting up certificates
	I1216 05:06:25.914203  311037 provision.go:84] configureAuth start
	I1216 05:06:25.914264  311037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-548714
	I1216 05:06:25.940676  311037 provision.go:143] copyHostCerts
	I1216 05:06:25.940779  311037 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:06:25.940799  311037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:06:25.940870  311037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:06:25.940987  311037 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:06:25.941009  311037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:06:25.941083  311037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:06:25.941174  311037 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:06:25.941193  311037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:06:25.941221  311037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:06:25.941285  311037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.calico-548714 san=[127.0.0.1 192.168.85.2 calico-548714 localhost minikube]
	I1216 05:06:26.050252  311037 provision.go:177] copyRemoteCerts
	I1216 05:06:26.050314  311037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:06:26.050350  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.068759  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.166814  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:06:26.187222  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 05:06:26.205248  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:06:26.222721  311037 provision.go:87] duration metric: took 308.499904ms to configureAuth
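	(Note: the server certificate generated at 05:06:25.941285 is signed for the SANs [127.0.0.1 192.168.85.2 calico-548714 localhost minikube] and copied to /etc/docker/server.pem above. A standard openssl invocation, sketched here, would confirm them on the machine:)
	
	    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	    # expect DNS:calico-548714, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:192.168.85.2 (order may vary)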
	I1216 05:06:26.222747  311037 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:06:26.222932  311037 config.go:182] Loaded profile config "calico-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:26.223071  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.241919  311037 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:26.242202  311037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1216 05:06:26.242223  311037 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:06:26.523604  311037 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:06:26.523638  311037 machine.go:97] duration metric: took 4.105199142s to provisionDockerMachine
	I1216 05:06:26.523649  311037 client.go:176] duration metric: took 9.581767661s to LocalClient.Create
	I1216 05:06:26.523666  311037 start.go:167] duration metric: took 9.581842767s to libmachine.API.Create "calico-548714"
	I1216 05:06:26.523672  311037 start.go:293] postStartSetup for "calico-548714" (driver="docker")
	I1216 05:06:26.523681  311037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:06:26.523732  311037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:06:26.523768  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.541901  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.635903  311037 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:06:26.639528  311037 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:06:26.639553  311037 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:06:26.639569  311037 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:06:26.639610  311037 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:06:26.639685  311037 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:06:26.639788  311037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:06:26.647230  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:26.666394  311037 start.go:296] duration metric: took 142.707698ms for postStartSetup
	I1216 05:06:26.666713  311037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-548714
	I1216 05:06:26.684831  311037 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/config.json ...
	I1216 05:06:26.685117  311037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:06:26.685165  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.703305  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.793366  311037 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:06:26.797904  311037 start.go:128] duration metric: took 9.858709s to createHost
	I1216 05:06:26.797929  311037 start.go:83] releasing machines lock for "calico-548714", held for 9.858855061s
	I1216 05:06:26.798041  311037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-548714
	I1216 05:06:26.815421  311037 ssh_runner.go:195] Run: cat /version.json
	I1216 05:06:26.815458  311037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:06:26.815465  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.815534  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.834742  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.835055  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.993569  311037 ssh_runner.go:195] Run: systemctl --version
	I1216 05:06:27.001626  311037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:06:27.042085  311037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:06:27.047643  311037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:06:27.047701  311037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:06:27.076449  311037 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:06:27.076479  311037 start.go:496] detecting cgroup driver to use...
	I1216 05:06:27.076519  311037 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:06:27.076569  311037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:06:27.095614  311037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:06:27.110816  311037 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:06:27.110875  311037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:06:27.130105  311037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:06:27.151195  311037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:06:27.256209  311037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:06:27.365188  311037 docker.go:234] disabling docker service ...
	I1216 05:06:27.365258  311037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:06:27.386898  311037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:06:27.402046  311037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:06:27.488051  311037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:06:27.575795  311037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:06:27.588361  311037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:06:27.602470  311037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:06:27.602535  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.612651  311037 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:06:27.612713  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.621140  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.629554  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.637808  311037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:06:27.645741  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.654455  311037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.667273  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.675658  311037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:06:27.683045  311037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:06:27.690868  311037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:27.779556  311037 ssh_runner.go:195] Run: sudo systemctl restart crio
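	(Note: taken together, the sed edits between 05:06:27.602 and 05:06:27.667 leave /etc/crio/crio.conf.d/02-crio.conf with roughly the keys below. This is reconstructed from the commands rather than read back from the node, and the section headers are an assumption about the stock drop-in layout:)
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]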
	I1216 05:06:27.921683  311037 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:06:27.921750  311037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:06:27.925794  311037 start.go:564] Will wait 60s for crictl version
	I1216 05:06:27.925847  311037 ssh_runner.go:195] Run: which crictl
	I1216 05:06:27.930031  311037 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:06:27.961214  311037 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:06:27.961290  311037 ssh_runner.go:195] Run: crio --version
	I1216 05:06:27.989664  311037 ssh_runner.go:195] Run: crio --version
	I1216 05:06:28.020610  311037 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:06:24.512090  302822 addons.go:530] duration metric: took 644.433155ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:06:24.718081  302822 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-548714" context rescaled to 1 replicas
	W1216 05:06:26.218757  302822 node_ready.go:57] node "auto-548714" has "Ready":"False" status (will retry)
	W1216 05:06:28.720204  302822 node_ready.go:57] node "auto-548714" has "Ready":"False" status (will retry)
	I1216 05:06:29.092327  306433 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 05:06:29.092398  306433 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:06:29.092517  306433 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:06:29.092593  306433 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:06:29.092645  306433 kubeadm.go:319] OS: Linux
	I1216 05:06:29.092736  306433 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:06:29.092823  306433 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:06:29.092893  306433 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:06:29.092966  306433 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:06:29.093055  306433 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:06:29.093108  306433 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:06:29.093169  306433 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:06:29.093228  306433 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:06:29.093361  306433 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:06:29.093512  306433 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:06:29.093654  306433 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:06:29.093725  306433 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:06:29.095625  306433 out.go:252]   - Generating certificates and keys ...
	I1216 05:06:29.095718  306433 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:06:29.095801  306433 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:06:29.095877  306433 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:06:29.095994  306433 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 05:06:29.096118  306433 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 05:06:29.096196  306433 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 05:06:29.096351  306433 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 05:06:29.096537  306433 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-548714 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 05:06:29.096592  306433 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 05:06:29.096700  306433 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-548714 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 05:06:29.096768  306433 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 05:06:29.096840  306433 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 05:06:29.096888  306433 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 05:06:29.096959  306433 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:06:29.097001  306433 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:06:29.097086  306433 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:06:29.097173  306433 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:06:29.097265  306433 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:06:29.097341  306433 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:06:29.097444  306433 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:06:29.097540  306433 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:06:29.098840  306433 out.go:252]   - Booting up control plane ...
	I1216 05:06:29.098929  306433 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:06:29.099039  306433 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:06:29.099121  306433 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:06:29.099270  306433 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:06:29.099424  306433 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:06:29.099534  306433 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:06:29.099640  306433 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:06:29.099716  306433 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:06:29.099903  306433 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:06:29.100060  306433 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:06:29.100109  306433 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001053555s
	I1216 05:06:29.100196  306433 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 05:06:29.100292  306433 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1216 05:06:29.100424  306433 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 05:06:29.100531  306433 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 05:06:29.100630  306433 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.922817936s
	I1216 05:06:29.100719  306433 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.266581857s
	I1216 05:06:29.100810  306433 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001430761s
	I1216 05:06:29.101001  306433 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 05:06:29.101184  306433 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 05:06:29.101258  306433 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 05:06:29.101464  306433 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-548714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 05:06:29.101533  306433 kubeadm.go:319] [bootstrap-token] Using token: mzm2oc.3ozg1p2oxw32434g
	I1216 05:06:29.102723  306433 out.go:252]   - Configuring RBAC rules ...
	I1216 05:06:29.102840  306433 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 05:06:29.102938  306433 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 05:06:29.103127  306433 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 05:06:29.103322  306433 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 05:06:29.103504  306433 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 05:06:29.103633  306433 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 05:06:29.103794  306433 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 05:06:29.103837  306433 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 05:06:29.103904  306433 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 05:06:29.103910  306433 kubeadm.go:319] 
	I1216 05:06:29.103988  306433 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 05:06:29.103995  306433 kubeadm.go:319] 
	I1216 05:06:29.104095  306433 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 05:06:29.104104  306433 kubeadm.go:319] 
	I1216 05:06:29.104138  306433 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 05:06:29.104211  306433 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 05:06:29.104287  306433 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 05:06:29.104300  306433 kubeadm.go:319] 
	I1216 05:06:29.104389  306433 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 05:06:29.104399  306433 kubeadm.go:319] 
	I1216 05:06:29.104480  306433 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 05:06:29.104493  306433 kubeadm.go:319] 
	I1216 05:06:29.104553  306433 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 05:06:29.104652  306433 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 05:06:29.104745  306433 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 05:06:29.104752  306433 kubeadm.go:319] 
	I1216 05:06:29.104858  306433 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 05:06:29.104954  306433 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 05:06:29.104968  306433 kubeadm.go:319] 
	I1216 05:06:29.105091  306433 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mzm2oc.3ozg1p2oxw32434g \
	I1216 05:06:29.105200  306433 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 05:06:29.105227  306433 kubeadm.go:319] 	--control-plane 
	I1216 05:06:29.105236  306433 kubeadm.go:319] 
	I1216 05:06:29.105364  306433 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 05:06:29.105385  306433 kubeadm.go:319] 
	I1216 05:06:29.105462  306433 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mzm2oc.3ozg1p2oxw32434g \
	I1216 05:06:29.105561  306433 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
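	(Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. Since this profile keeps its PKI under /var/lib/minikube/certs rather than /etc/kubernetes/pki, it can be re-derived on the node with stock openssl; a sketch:)
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl pkey -pubin -outform der | sha256sum
	    # -> b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe, as printed above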
	I1216 05:06:29.105570  306433 cni.go:84] Creating CNI manager for "kindnet"
	I1216 05:06:29.107887  306433 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 05:06:28.021806  311037 cli_runner.go:164] Run: docker network inspect calico-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:28.043564  311037 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:06:28.047842  311037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:06:28.058029  311037 kubeadm.go:884] updating cluster {Name:calico-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:06:28.058173  311037 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:28.058215  311037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:28.089868  311037 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:28.089888  311037 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:06:28.089944  311037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:28.120238  311037 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:28.120274  311037 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:06:28.120284  311037 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1216 05:06:28.120392  311037 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-548714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
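	(Note: the [Unit]/[Service] fragment above is what the 363-byte scp a few lines below writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once in place, the merged unit can be reviewed with systemd's own tooling; illustrative only:)
	
	    systemctl cat kubelet
	    # prints /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in,
	    # including the ExecStart override carrying --node-ip=192.168.85.2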
	I1216 05:06:28.120481  311037 ssh_runner.go:195] Run: crio config
	I1216 05:06:28.171404  311037 cni.go:84] Creating CNI manager for "calico"
	I1216 05:06:28.171434  311037 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:06:28.171472  311037 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-548714 NodeName:calico-548714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:06:28.171631  311037 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-548714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:06:28.171714  311037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:06:28.180143  311037 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:06:28.180206  311037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:06:28.188158  311037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 05:06:28.201802  311037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:06:28.217990  311037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
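	(Note: the YAML above is shipped to /var/tmp/minikube/kubeadm.yaml.new here and copied into place before init. Recent kubeadm releases can sanity-check such a file up front; a sketch, assuming the subcommand is available in this v1.34 binary:)
	
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml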
	I1216 05:06:28.231559  311037 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:06:28.236465  311037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:06:28.247524  311037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:28.354965  311037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:28.381482  311037 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714 for IP: 192.168.85.2
	I1216 05:06:28.381507  311037 certs.go:195] generating shared ca certs ...
	I1216 05:06:28.381525  311037 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.381671  311037 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:06:28.381726  311037 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:06:28.381741  311037 certs.go:257] generating profile certs ...
	I1216 05:06:28.381795  311037 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.key
	I1216 05:06:28.381811  311037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.crt with IP's: []
	I1216 05:06:28.427750  311037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.crt ...
	I1216 05:06:28.427782  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.crt: {Name:mke1a5169816bd7f9795fdf5df1d0b050bbcd1b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.427953  311037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.key ...
	I1216 05:06:28.427964  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.key: {Name:mk8dd9b5942e6166ddefffd1b9b9315328d90f26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.428070  311037 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key.a66e11a3
	I1216 05:06:28.428085  311037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt.a66e11a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1216 05:06:28.555207  311037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt.a66e11a3 ...
	I1216 05:06:28.555234  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt.a66e11a3: {Name:mk6cabbd9369101497ce2b91312177a7223ca3f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.555407  311037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key.a66e11a3 ...
	I1216 05:06:28.555419  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key.a66e11a3: {Name:mk63a2876c3fba88d6e89c1572dae03732d34e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.555492  311037 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt.a66e11a3 -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt
	I1216 05:06:28.555575  311037 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key.a66e11a3 -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key
	I1216 05:06:28.555628  311037 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.key
	I1216 05:06:28.555642  311037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.crt with IP's: []
	I1216 05:06:28.620664  311037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.crt ...
	I1216 05:06:28.620689  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.crt: {Name:mk47ced84380f06c3cafcd86283bb954b01bc3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.620837  311037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.key ...
	I1216 05:06:28.620848  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.key: {Name:mk482284df6651ebb6b5b1982bad63eda7eff263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.621015  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:06:28.621065  311037 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:06:28.621074  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:06:28.621097  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:06:28.621121  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:06:28.621146  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:06:28.621194  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:28.621715  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:06:28.640424  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:06:28.657804  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:06:28.675307  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:06:28.694121  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:06:28.714715  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:06:28.738949  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:06:28.758519  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:06:28.776566  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:06:28.794882  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:06:28.812343  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:06:28.829774  311037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:06:28.843341  311037 ssh_runner.go:195] Run: openssl version
	I1216 05:06:28.849472  311037 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:06:28.856988  311037 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:06:28.864221  311037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:06:28.867909  311037 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:06:28.867975  311037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:06:28.902512  311037 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:28.910515  311037 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:28.917541  311037 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:28.924389  311037 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:06:28.931188  311037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:28.934791  311037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:28.934838  311037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:28.972087  311037 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:06:28.980126  311037 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:06:28.987849  311037 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:06:28.995182  311037 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:06:29.002642  311037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:06:29.006377  311037 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:06:29.006430  311037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:06:29.044098  311037 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:06:29.053355  311037 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
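	(Note: the `openssl x509 -hash` / `ln -fs` pairs above implement the classic c_rehash layout: OpenSSL looks up CA certificates in /etc/ssl/certs by a symlink named <subject-hash>.0, so each cert is hashed and then linked under that name. Done by hand, the minikubeCA step would look like:)
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"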
	I1216 05:06:29.061363  311037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:06:29.064866  311037 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:06:29.064916  311037 kubeadm.go:401] StartCluster: {Name:calico-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:29.064995  311037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:06:29.065050  311037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:06:29.095705  311037 cri.go:89] found id: ""
	I1216 05:06:29.095768  311037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:06:29.105306  311037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:29.113972  311037 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:29.114067  311037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:29.121992  311037 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:29.122094  311037 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:29.122150  311037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:06:29.130831  311037 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:29.130892  311037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:29.138589  311037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:06:29.147365  311037 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:29.147412  311037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:29.155659  311037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:06:29.163613  311037 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:29.163668  311037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:29.171124  311037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:06:29.179344  311037 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:29.179393  311037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:06:29.187335  311037 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:29.234049  311037 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 05:06:29.234122  311037 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:06:29.265387  311037 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:06:29.265482  311037 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:06:29.265566  311037 kubeadm.go:319] OS: Linux
	I1216 05:06:29.265643  311037 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:06:29.265708  311037 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:06:29.265770  311037 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:06:29.265837  311037 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:06:29.265934  311037 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:06:29.266060  311037 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:06:29.266134  311037 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:06:29.266210  311037 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:06:29.346130  311037 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:06:29.346342  311037 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:06:29.346501  311037 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:06:29.362538  311037 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:06:29.109060  306433 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 05:06:29.113727  306433 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 05:06:29.113748  306433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 05:06:29.126936  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 05:06:29.403145  306433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 05:06:29.403317  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:29.403419  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-548714 minikube.k8s.io/updated_at=2025_12_16T05_06_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kindnet-548714 minikube.k8s.io/primary=true
	I1216 05:06:29.415984  306433 ops.go:34] apiserver oom_adj: -16
	I1216 05:06:29.508360  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:30.008491  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:29.365566  311037 out.go:252]   - Generating certificates and keys ...
	I1216 05:06:29.365670  311037 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:06:29.365776  311037 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:06:29.759942  311037 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:06:30.402951  311037 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 05:06:30.781719  311037 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 05:06:30.944764  311037 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 05:06:31.045678  311037 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 05:06:31.045912  311037 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-548714 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:06:31.405068  311037 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 05:06:31.405258  311037 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-548714 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:06:31.613446  311037 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	
	
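Before re-running kubeadm init, the 311037 process above checks each static kubeconfig for the expected control-plane endpoint and deletes any file that fails the grep (the exit status 2 here simply means the files do not exist yet on this fresh node). A minimal Go sketch of that grep-then-remove loop, assuming local exec in place of minikube's SSH runner:

// Sketch of the stale-kubeconfig cleanup these log lines trace: for each
// kubeconfig, grep for the expected control-plane endpoint; if grep exits
// non-zero (endpoint absent or file missing), remove the file so kubeadm
// regenerates it. Helper layout is illustrative, not minikube's kubeadm.go.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// GNU grep exits 1 when the pattern is absent and 2 when the file
		// cannot be read; either way the config is stale or missing.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, conf)
			// rm -f, so a file that never existed is not an error
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}

With the stale configs gone, the kubeadm init invocation that follows regenerates all four files under /etc/kubernetes.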
	==> CRI-O <==
	Dec 16 05:05:49 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:05:49.886006175Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:05:49 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:05:49.890086417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:05:49 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:05:49.890111378Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.045866394Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f6a40e70-66b6-484d-a88a-c87d81bc3e39 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.049328432Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8cd5b7a6-efe6-4352-b417-c2c88c7a6d18 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.052819614Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg/dashboard-metrics-scraper" id=661ce33b-f483-4808-99f8-a2c5f4d2d3ff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.052982563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.061480242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.062120284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.087709672Z" level=info msg="Created container 239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg/dashboard-metrics-scraper" id=661ce33b-f483-4808-99f8-a2c5f4d2d3ff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.08850349Z" level=info msg="Starting container: 239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe" id=1dcae4a5-487a-49ab-a4e5-e156c47903ab name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.090281112Z" level=info msg="Started container" PID=1775 containerID=239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg/dashboard-metrics-scraper id=1dcae4a5-487a-49ab-a4e5-e156c47903ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e1af71ba078b88311b22517d4123cdfb8e79bd1bb51f536b9bd92a59bea4afe
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.16213144Z" level=info msg="Removing container: 9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee" id=7c485dc5-84c7-4fc7-85f4-9d75f0756b18 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.177933893Z" level=info msg="Removed container 9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg/dashboard-metrics-scraper" id=7c485dc5-84c7-4fc7-85f4-9d75f0756b18 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.18545241Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=81cd8321-e492-4b01-bab1-768bd5e4f10b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.186426026Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fbcd978-a887-4762-9c51-ba7e93d68363 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.188492896Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=631cbaa3-46bf-4739-b982-1dc338979b91 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.188633879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.198186581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.198372311Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f7df6ecc2f1bd19d1e29a6525104fe10eca18d502143edefbf48a84c7de9e03c/merged/etc/passwd: no such file or directory"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.198403476Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f7df6ecc2f1bd19d1e29a6525104fe10eca18d502143edefbf48a84c7de9e03c/merged/etc/group: no such file or directory"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.198710191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.281540218Z" level=info msg="Created container aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1: kube-system/storage-provisioner/storage-provisioner" id=631cbaa3-46bf-4739-b982-1dc338979b91 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.283196611Z" level=info msg="Starting container: aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1" id=77470a2b-462d-427a-bc9e-5a1a78dc92b8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.286715409Z" level=info msg="Started container" PID=1789 containerID=aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1 description=kube-system/storage-provisioner/storage-provisioner id=77470a2b-462d-427a-bc9e-5a1a78dc92b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8aaaa33b3fab90063d00571412e5932f5e68f99732f780e56bc06ebbfdc8ff05
	
	
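Earlier in this run, cri.go listed kube-system containers with crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system and got an empty ID list back. The same check is easy to reproduce; a sketch using local exec rather than minikube's ssh_runner:

// Sketch: list kube-system container IDs the way the cri.go log line shows,
// filtering on the pod-namespace label crictl exposes. Assumes crictl is on
// PATH and runnable via sudo; error handling is deliberately minimal.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out)) // one container ID per line
	if len(ids) == 0 {
		fmt.Println(`found id: ""`) // matches the cri.go log when nothing is running
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}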
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	aafdcce5933f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   8aaaa33b3fab9       storage-provisioner                                    kube-system
	239478b010936       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   2e1af71ba078b       dashboard-metrics-scraper-6ffb444bf9-gsxcg             kubernetes-dashboard
	227abf9a6095b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   b021e84e7b30e       kubernetes-dashboard-855c9754f9-bbfmr                  kubernetes-dashboard
	676776061bc68       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   b0e033aa7db48       coredns-66bc5c9577-65x5z                               kube-system
	74770f4787beb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   a32a6bf7b9285       busybox                                                default
	fa05464129c4c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   0326bd38f80c1       kindnet-xml94                                          kube-system
	d7bcd1550ff49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   8aaaa33b3fab9       storage-provisioner                                    kube-system
	e04d861b0da30       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           53 seconds ago      Running             kube-proxy                  0                   5e6147af5af81       kube-proxy-dmw9t                                       kube-system
	3643b4b2bf495       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   c9c38a97a0748       kube-apiserver-default-k8s-diff-port-375252            kube-system
	fb5bb0a8d03a4       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   fa4ca6bbbc69c       kube-scheduler-default-k8s-diff-port-375252            kube-system
	bdd567a96f2d0       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   3ea36d2405bdf       kube-controller-manager-default-k8s-diff-port-375252   kube-system
	5031357bebf46       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   2f2bdddc2b036       etcd-default-k8s-diff-port-375252                      kube-system
	
	
	==> coredns [676776061bc68b44dab6f7b1af084b1b49c0ccdbceaf6cf33a84c892af410ff0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50395 - 11661 "HINFO IN 6607428354816222733.9146165306431085929. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045599452s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
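The coredns errors above are plain TCP dial timeouts to the kubernetes Service VIP (10.96.0.1:443), which typically means kube-proxy had not yet programmed the service rules when the reflectors first listed. The same failure mode can be reproduced from inside a pod with a dial deadline; a sketch:

// Sketch: probe the kubernetes Service VIP the way the failing coredns
// reflector effectively does, with a short timeout so a missing kube-proxy
// rule shows up as "i/o timeout" instead of hanging. The address is the
// first IP of the default ServiceCIDR, as in the logs above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err) // e.g. dial tcp ... i/o timeout
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}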
	==> describe nodes <==
	Name:               default-k8s-diff-port-375252
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-375252
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=default-k8s-diff-port-375252
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:04:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-375252
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:06:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:06:09 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:06:09 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:06:09 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:06:09 +0000   Tue, 16 Dec 2025 05:04:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-375252
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                226f3bd9-396d-4746-a6f5-231ce56a4f47
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-65x5z                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-375252                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-xml94                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-375252             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-375252    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-dmw9t                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-375252             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gsxcg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bbfmr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-375252 event: Registered Node default-k8s-diff-port-375252 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-375252 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-375252 event: Registered Node default-k8s-diff-port-375252 in Controller
	
	
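The Allocated resources block is just the column sums of the pod table above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 8000m is reported as 10% after kubectl's integer truncation (the exact value is 10.6%). A tiny check:

// Sketch: re-derive the "Allocated resources" CPU summary from the per-pod
// requests listed above (values in millicores).
package main

import "fmt"

func main() {
	// coredns, etcd, kindnet, kube-apiserver, kube-controller-manager, kube-scheduler
	cpuRequests := []int{100, 100, 100, 250, 200, 100}
	total := 0
	for _, m := range cpuRequests {
		total += m
	}
	const nodeMillicores = 8 * 1000
	fmt.Printf("cpu %dm (%d%%)\n", total, total*100/nodeMillicores) // cpu 850m (10%)
}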
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01] <==
	{"level":"warn","ts":"2025-12-16T05:05:37.563378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.569594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.575931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.591234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.597417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.603833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.610854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.617972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.624457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.632135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.639273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.645621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.654272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.661819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.668394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.680438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.687405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.694304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.749313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42520","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T05:06:09.500117Z","caller":"traceutil/trace.go:172","msg":"trace[1333349480] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"111.60101ms","start":"2025-12-16T05:06:09.388496Z","end":"2025-12-16T05:06:09.500098Z","steps":["trace[1333349480] 'process raft request'  (duration: 111.439888ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.005882Z","caller":"traceutil/trace.go:172","msg":"trace[1494257111] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"153.294419ms","start":"2025-12-16T05:06:09.852571Z","end":"2025-12-16T05:06:10.005865Z","steps":["trace[1494257111] 'process raft request'  (duration: 153.170315ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:06:21.389091Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.8556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T05:06:21.389205Z","caller":"traceutil/trace.go:172","msg":"trace[2084687672] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:627; }","duration":"196.013758ms","start":"2025-12-16T05:06:21.193173Z","end":"2025-12-16T05:06:21.389187Z","steps":["trace[2084687672] 'range keys from in-memory index tree'  (duration: 195.781042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:06:21.389093Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.36499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T05:06:21.389400Z","caller":"traceutil/trace.go:172","msg":"trace[907062882] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:627; }","duration":"115.706906ms","start":"2025-12-16T05:06:21.273672Z","end":"2025-12-16T05:06:21.389379Z","steps":["trace[907062882] 'range keys from in-memory index tree'  (duration: 115.300269ms)"],"step_count":1}
	
	
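The etcd entries are structured JSON, which makes the two slow reads (apply took 195.8ms and 115.4ms against the 100ms expected-duration) easy to pull out mechanically. A sketch that filters such warnings from a saved copy of this log; the field names match the entries above, the file name is illustrative:

// Sketch: scan an etcd log (one JSON object per line, as dumped above) and
// print only the slow-apply warnings.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type entry struct {
	Level string `json:"level"`
	Msg   string `json:"msg"`
	Took  string `json:"took"`
}

func main() {
	f, err := os.Open("etcd.log") // hypothetical capture of the section above
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var e entry
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip non-JSON lines
		}
		if e.Level == "warn" && e.Msg == "apply request took too long" {
			fmt.Println("slow apply:", e.Took)
		}
	}
}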
	==> kernel <==
	 05:06:33 up 49 min,  0 user,  load average: 5.80, 3.52, 2.26
	Linux default-k8s-diff-port-375252 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fa05464129c4cd5d2065744b02dba6d46de76d42ffb639148532972ad4ace1c8] <==
	I1216 05:05:39.600358       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:05:39.600619       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 05:05:39.600782       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:05:39.600799       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:05:39.600820       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:05:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:05:39.896369       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:05:39.896472       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:05:39.896517       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:05:39.896701       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:05:40.297080       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:05:40.297116       1 metrics.go:72] Registering metrics
	I1216 05:05:40.297226       1 controller.go:711] "Syncing nftables rules"
	I1216 05:05:49.869605       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:05:49.869652       1 main.go:301] handling current node
	I1216 05:05:59.876122       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:05:59.876162       1 main.go:301] handling current node
	I1216 05:06:09.868742       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:06:09.868775       1 main.go:301] handling current node
	I1216 05:06:19.872112       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:06:19.872165       1 main.go:301] handling current node
	I1216 05:06:29.878104       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:06:29.878141       1 main.go:301] handling current node
	
	
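kindnet's steady-state output above is a fixed-interval reconcile: every ten seconds it re-lists node IPs and re-syncs its routes and nftables rules, logging "handling current node" when it is the only node. The control flow is an ordinary ticker loop; a sketch with a stand-in sync function:

// Sketch: the fixed-interval reconcile loop behind kindnet's repeating
// "Handling node with IPs" lines. handleNode stands in for the real
// route/nftables sync.
package main

import (
	"fmt"
	"time"
)

func handleNode() {
	fmt.Println("handling current node") // real code diffs node IPs and syncs rules
}

func main() {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		handleNode()
	}
}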
	==> kube-apiserver [3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45] <==
	I1216 05:05:38.232175       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:05:38.232868       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 05:05:38.232887       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 05:05:38.233063       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:05:38.233253       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:05:38.233388       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 05:05:38.233460       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 05:05:38.233712       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 05:05:38.233787       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:05:38.243928       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1216 05:05:38.248061       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:05:38.258137       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:38.295887       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:05:38.310959       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 05:05:38.557297       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:05:38.584425       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:05:38.602007       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:05:38.608967       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:05:38.616685       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:05:38.649826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.129.30"}
	I1216 05:05:38.662404       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.57.111"}
	I1216 05:05:39.135747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:05:41.998870       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:05:42.147414       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:05:42.198816       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff] <==
	I1216 05:05:41.562385       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 05:05:41.575659       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 05:05:41.575722       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 05:05:41.575774       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 05:05:41.575780       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 05:05:41.575785       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 05:05:41.576840       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 05:05:41.579046       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 05:05:41.594491       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 05:05:41.594525       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:05:41.594550       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 05:05:41.594675       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 05:05:41.594790       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 05:05:41.595050       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 05:05:41.595699       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 05:05:41.595725       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1216 05:05:41.599990       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 05:05:41.600056       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:05:41.601176       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 05:05:41.603517       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 05:05:41.605854       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 05:05:41.605916       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 05:05:41.605987       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-375252"
	I1216 05:05:41.606071       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 05:05:41.617934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e04d861b0da300bb613395b582a587069a355bde3fc0afcfb3f515297db72226] <==
	I1216 05:05:39.481550       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:05:39.556597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:05:39.657692       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:05:39.658043       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 05:05:39.658322       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:05:39.676686       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:05:39.676736       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:05:39.681575       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:05:39.681973       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:05:39.682048       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:39.683241       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:05:39.683263       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:05:39.683275       1 config.go:200] "Starting service config controller"
	I1216 05:05:39.683289       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:05:39.683301       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:05:39.683307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:05:39.683321       1 config.go:309] "Starting node config controller"
	I1216 05:05:39.683326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:05:39.683332       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:05:39.783483       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:05:39.783488       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:05:39.783496       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3] <==
	I1216 05:05:36.483555       1 serving.go:386] Generated self-signed cert in-memory
	W1216 05:05:38.172112       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:05:38.172164       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:05:38.172176       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:05:38.172185       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:05:38.212957       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:05:38.213101       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:38.216641       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:38.216782       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:38.217089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:05:38.217218       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:05:38.317879       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 05:05:42 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:42.222428     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshhg\" (UniqueName: \"kubernetes.io/projected/ce088b60-6c16-42c0-8eb6-514021eb3e34-kube-api-access-dshhg\") pod \"kubernetes-dashboard-855c9754f9-bbfmr\" (UID: \"ce088b60-6c16-42c0-8eb6-514021eb3e34\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bbfmr"
	Dec 16 05:05:45 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:45.872861     736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 16 05:05:46 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:46.136932     736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bbfmr" podStartSLOduration=0.943861304 podStartE2EDuration="4.136907827s" podCreationTimestamp="2025-12-16 05:05:42 +0000 UTC" firstStartedPulling="2025-12-16 05:05:42.456613329 +0000 UTC m=+7.500407235" lastFinishedPulling="2025-12-16 05:05:45.649659862 +0000 UTC m=+10.693453758" observedRunningTime="2025-12-16 05:05:46.1364839 +0000 UTC m=+11.180277815" watchObservedRunningTime="2025-12-16 05:05:46.136907827 +0000 UTC m=+11.180701741"
	Dec 16 05:05:48 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:48.115633     736 scope.go:117] "RemoveContainer" containerID="d9dc8cf62cd789c1ebe249a4423bc4186f245e326e0e3d374818527474bfb896"
	Dec 16 05:05:49 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:49.122672     736 scope.go:117] "RemoveContainer" containerID="d9dc8cf62cd789c1ebe249a4423bc4186f245e326e0e3d374818527474bfb896"
	Dec 16 05:05:49 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:49.123604     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:05:49 default-k8s-diff-port-375252 kubelet[736]: E1216 05:05:49.123848     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:05:50 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:50.126329     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:05:50 default-k8s-diff-port-375252 kubelet[736]: E1216 05:05:50.126490     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:05:51 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:51.128788     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:05:51 default-k8s-diff-port-375252 kubelet[736]: E1216 05:05:51.128960     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:06:03 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:03.045279     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:06:03 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:03.160826     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:06:03 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:03.161166     736 scope.go:117] "RemoveContainer" containerID="239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	Dec 16 05:06:03 default-k8s-diff-port-375252 kubelet[736]: E1216 05:06:03.161362     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:06:09 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:09.756559     736 scope.go:117] "RemoveContainer" containerID="239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	Dec 16 05:06:09 default-k8s-diff-port-375252 kubelet[736]: E1216 05:06:09.756814     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:06:10 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:10.184988     736 scope.go:117] "RemoveContainer" containerID="d7bcd1550ff4978f686de51b0a89276350f63f70a69051a18651034993bd6ecb"
	Dec 16 05:06:21 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:21.044230     736 scope.go:117] "RemoveContainer" containerID="239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	Dec 16 05:06:21 default-k8s-diff-port-375252 kubelet[736]: E1216 05:06:21.044425     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:06:30 default-k8s-diff-port-375252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:06:30 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:30.011296     736 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 05:06:30 default-k8s-diff-port-375252 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:06:30 default-k8s-diff-port-375252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:06:30 default-k8s-diff-port-375252 systemd[1]: kubelet.service: Consumed 1.799s CPU time.
	
	
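The kubelet messages show CrashLoopBackOff's geometric schedule at work: back-off 10s after the first dashboard-metrics-scraper crash, back-off 20s after the second. Kubelet doubles the restart delay per crash up to a five-minute cap; a sketch of the resulting schedule:

// Sketch: the restart-delay schedule implied by the "back-off 10s" /
// "back-off 20s" kubelet messages above: the delay doubles per crash and
// is capped (kubelet's cap is five minutes).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}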
	==> kubernetes-dashboard [227abf9a6095bd5d2d1170608d9021974081bc4722607f7541d1e12fb67fe96c] <==
	2025/12/16 05:05:45 Using namespace: kubernetes-dashboard
	2025/12/16 05:05:45 Using in-cluster config to connect to apiserver
	2025/12/16 05:05:45 Using secret token for csrf signing
	2025/12/16 05:05:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 05:05:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 05:05:45 Successful initial request to the apiserver, version: v1.34.2
	2025/12/16 05:05:45 Generating JWE encryption key
	2025/12/16 05:05:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 05:05:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 05:05:45 Initializing JWE encryption key from synchronized object
	2025/12/16 05:05:45 Creating in-cluster Sidecar client
	2025/12/16 05:05:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:05:45 Serving insecurely on HTTP port: 9090
	2025/12/16 05:06:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:05:45 Starting overwatch
	
	
	==> storage-provisioner [aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1] <==
	I1216 05:06:10.626549       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:06:10.636241       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:06:10.636295       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:06:10.638658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:14.094929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:18.355249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:21.953605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:25.007858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:28.031065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:28.034963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:06:28.035153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:06:28.035296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d1dfb198-3305-4c9d-9019-8ec75faa390c", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-375252_92472f9f-936d-41d5-831b-069edeb14337 became leader
	I1216 05:06:28.035338       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-375252_92472f9f-936d-41d5-831b-069edeb14337!
	W1216 05:06:28.037966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:28.041258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:06:28.135643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-375252_92472f9f-936d-41d5-831b-069edeb14337!
	W1216 05:06:30.046147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:30.051107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:32.055436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:32.062716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d7bcd1550ff4978f686de51b0a89276350f63f70a69051a18651034993bd6ecb] <==
	I1216 05:05:39.444621       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 05:06:09.446715       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
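Two failure signatures stand out in the dump above. The kubelet lines show ordinary crash-loop handling for dashboard-metrics-scraper: the restart delay roughly doubles after each failed start (hence "back-off 20s") rather than the pod failing permanently. The more telling one is the first storage-provisioner instance (d7bcd155…), which exited fatally because it could not reach the apiserver through the in-cluster Service VIP ("dial tcp 10.96.0.1:443: i/o timeout"); its replacement (aafdcce5…) came up shortly after and won the leader lease. A minimal diagnostic sketch for that failure mode, assuming you can run a Go binary from inside the node's network (the VIP is only routable from the cluster network, e.g. via "minikube ssh"):

// vipprobe.go - hypothetical diagnostic, not part of minikube: check whether
// the kubernetes Service VIP that storage-provisioner dials (10.96.0.1:443
// in the log above) accepts TCP connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		// Same failure mode as the provisioner's fatal
		// "dial tcp 10.96.0.1:443: i/o timeout" above.
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}

A timeout here right after a node restart usually points at kube-proxy not yet having reprogrammed the Service rules, which matches the second provisioner instance succeeding once the control plane settled.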
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252: exit status 2 (358.495159ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
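The harness deliberately tolerates this: "minikube status" reports component state through its exit code, so exit status 2 with "Running" on stdout is informative rather than a hard failure, which is why the log marks it "(may be ok)". A sketch of how a caller can separate the output from the exit code (binary path and profile name are taken from the run above; this is not the helpers_test.go source):

// statuscheck.go - hypothetical wrapper around "minikube status".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "default-k8s-diff-port-375252")
	out, err := cmd.CombinedOutput()
	fmt.Printf("stdout: %s", out) // "Running" in the run above
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 2, as seen above, still carries usable state.
		fmt.Println("exit code:", ee.ExitCode())
	}
}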
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-375252 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-375252
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-375252:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4",
	        "Created": "2025-12-16T05:04:21.317560531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293688,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:05:28.868892712Z",
	            "FinishedAt": "2025-12-16T05:05:26.017939664Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/hosts",
	        "LogPath": "/var/lib/docker/containers/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4/f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4-json.log",
	        "Name": "/default-k8s-diff-port-375252",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-375252:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-375252",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f7855ebd974c17f118856f0d63b6145f90ff72ecde74470d5eb7e5c56a9955b4",
	                "LowerDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e-init/diff:/var/lib/docker/overlay2/c594fac72b983107e20cd349e044362db9cb338871c74a3f2a94ce382fc6c5b3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b52957274f9c2b47b7ab43e0959f652505b2c1d70d57171fda18a43c4b0aa91e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-375252",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-375252/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-375252",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-375252",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-375252",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c0434c79d016bebb8278e0c7422e8854fd5240240d0ce2fafb4d3f5551fbfa11",
	            "SandboxKey": "/var/run/docker/netns/c0434c79d016",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-375252": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b4cd001204df0b61e89eebe6a12657aa97766dbbe6ee158bf1c23f85315b1b70",
	                    "EndpointID": "96351fa7711459eae73f4b9ac5e4eb9bea44b2322848dccc5aea78ebc2e4c080",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:26:0c:8d:5f:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-375252",
	                        "f7855ebd974c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
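The useful facts in this inspect dump are that the container is running and not paused (despite this being the Pause test), and that the apiserver's 8444/tcp is published on 127.0.0.1:33107. A hypothetical helper that recovers such a mapping by decoding the same JSON (field names as they appear in the dump above):

// hostport.go - hypothetical, not minikube code: find the host port bound
// to 8444/tcp in `docker inspect` output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models just the slice of the inspect JSON we need.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-375252").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		panic("unexpected inspect output")
	}
	for _, b := range cs[0].NetworkSettings.Ports["8444/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33107 above
	}
}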
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252: exit status 2 (362.199731ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-375252 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-375252 logs -n 25: (1.333916333s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-545075                                                                                                                                                                                                                            │ old-k8s-version-545075       │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:06 UTC │
	│ image   │ no-preload-990631 image list --format=json                                                                                                                                                                                                           │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p no-preload-990631 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-418580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ stop    │ -p newest-cni-418580 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-418580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:06 UTC │
	│ image   │ embed-certs-127599 image list --format=json                                                                                                                                                                                                          │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ pause   │ -p embed-certs-127599 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ delete  │ -p no-preload-990631                                                                                                                                                                                                                                 │ no-preload-990631            │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │ 16 Dec 25 05:05 UTC │
	│ start   │ -p auto-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-548714                  │ jenkins │ v1.37.0 │ 16 Dec 25 05:05 UTC │                     │
	│ delete  │ -p embed-certs-127599                                                                                                                                                                                                                                │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ delete  │ -p embed-certs-127599                                                                                                                                                                                                                                │ embed-certs-127599           │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ start   │ -p kindnet-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-548714               │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	│ image   │ newest-cni-418580 image list --format=json                                                                                                                                                                                                           │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ pause   │ -p newest-cni-418580 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	│ delete  │ -p newest-cni-418580                                                                                                                                                                                                                                 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ delete  │ -p newest-cni-418580                                                                                                                                                                                                                                 │ newest-cni-418580            │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ start   │ -p calico-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                               │ calico-548714                │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	│ image   │ default-k8s-diff-port-375252 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │ 16 Dec 25 05:06 UTC │
	│ pause   │ -p default-k8s-diff-port-375252 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-375252 │ jenkins │ v1.37.0 │ 16 Dec 25 05:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:06:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:06:16.734529  311037 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:06:16.734644  311037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:16.734654  311037 out.go:374] Setting ErrFile to fd 2...
	I1216 05:06:16.734661  311037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:06:16.734977  311037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:06:16.735504  311037 out.go:368] Setting JSON to false
	I1216 05:06:16.736755  311037 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2928,"bootTime":1765858649,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:06:16.736807  311037 start.go:143] virtualization: kvm guest
	I1216 05:06:16.738874  311037 out.go:179] * [calico-548714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:06:16.740144  311037 notify.go:221] Checking for updates...
	I1216 05:06:16.740281  311037 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:06:16.741579  311037 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:06:16.742859  311037 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:06:16.744169  311037 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:06:16.748495  311037 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:06:16.749660  311037 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:06:16.751399  311037 config.go:182] Loaded profile config "auto-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:16.751528  311037 config.go:182] Loaded profile config "default-k8s-diff-port-375252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:16.751616  311037 config.go:182] Loaded profile config "kindnet-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:16.751695  311037 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:06:16.774944  311037 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:06:16.775059  311037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:16.835259  311037 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:06:16.823973244 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:16.835379  311037 docker.go:319] overlay module found
	I1216 05:06:16.837933  311037 out.go:179] * Using the docker driver based on user configuration
	I1216 05:06:16.839115  311037 start.go:309] selected driver: docker
	I1216 05:06:16.839129  311037 start.go:927] validating driver "docker" against <nil>
	I1216 05:06:16.839139  311037 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:06:16.839652  311037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:06:16.909172  311037 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-16 05:06:16.892445715 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:06:16.909326  311037 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:06:16.909634  311037 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:06:16.911337  311037 out.go:179] * Using Docker driver with root privileges
	I1216 05:06:16.912602  311037 cni.go:84] Creating CNI manager for "calico"
	I1216 05:06:16.912619  311037 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1216 05:06:16.912684  311037 start.go:353] cluster config:
	{Name:calico-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:16.914286  311037 out.go:179] * Starting "calico-548714" primary control-plane node in "calico-548714" cluster
	I1216 05:06:16.915336  311037 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:06:16.916400  311037 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:06:16.917468  311037 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:16.917505  311037 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:06:16.917530  311037 cache.go:65] Caching tarball of preloaded images
	I1216 05:06:16.917591  311037 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:06:16.917631  311037 preload.go:238] Found /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:06:16.917649  311037 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:06:16.917764  311037 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/config.json ...
	I1216 05:06:16.917790  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/config.json: {Name:mkf46a9345d8db3aaffd3f2e9c58dd5028fcc8df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:16.938878  311037 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:06:16.938894  311037 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:06:16.938913  311037 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:06:16.938952  311037 start.go:360] acquireMachinesLock for calico-548714: {Name:mk0fe94f066db6459e497304b80f7300cf7e5cc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:06:16.939059  311037 start.go:364] duration metric: took 88.339µs to acquireMachinesLock for "calico-548714"
	I1216 05:06:16.939096  311037 start.go:93] Provisioning new machine with config: &{Name:calico-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:06:16.939178  311037 start.go:125] createHost starting for "" (driver="docker")
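The acquireMachinesLock lines above show the shape of minikube's machine lock: retry every 500ms, give up after 10m, and record how long the wait took (88µs here, since nothing contended). A minimal sketch of that pattern, using a plain exclusive-create lock file rather than whatever minikube uses internally (path and semantics are assumptions):

// machlock.go - illustrative only, not minikube source.
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire retries an exclusive-create of path every delay until timeout.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release callback
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out after %v acquiring %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/calico-548714.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("machines lock held; createHost can proceed")
}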
	I1216 05:06:15.497041  306433 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:06:15.497170  306433 machine.go:97] duration metric: took 4.200677815s to provisionDockerMachine
	I1216 05:06:15.497216  306433 client.go:176] duration metric: took 9.757109246s to LocalClient.Create
	I1216 05:06:15.497265  306433 start.go:167] duration metric: took 9.757207317s to libmachine.API.Create "kindnet-548714"
	I1216 05:06:15.497321  306433 start.go:293] postStartSetup for "kindnet-548714" (driver="docker")
	I1216 05:06:15.497364  306433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:06:15.497466  306433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:06:15.497530  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:15.518944  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:15.621008  306433 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:06:15.624866  306433 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:06:15.624899  306433 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:06:15.624912  306433 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:06:15.624966  306433 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:06:15.625082  306433 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:06:15.625246  306433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:06:15.635460  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:15.658133  306433 start.go:296] duration metric: took 160.754363ms for postStartSetup
	I1216 05:06:15.658562  306433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-548714
	I1216 05:06:15.679635  306433 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/config.json ...
	I1216 05:06:15.679955  306433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:06:15.680008  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:15.704489  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:15.809970  306433 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:06:15.816138  306433 start.go:128] duration metric: took 10.078151931s to createHost
	I1216 05:06:15.816163  306433 start.go:83] releasing machines lock for "kindnet-548714", held for 10.078288306s
	I1216 05:06:15.816224  306433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-548714
	I1216 05:06:15.837056  306433 ssh_runner.go:195] Run: cat /version.json
	I1216 05:06:15.837102  306433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:06:15.837118  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:15.837174  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:15.864111  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:15.866757  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:16.040009  306433 ssh_runner.go:195] Run: systemctl --version
	I1216 05:06:16.046714  306433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:06:16.106078  306433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:06:16.113847  306433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:06:16.114065  306433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:06:16.157848  306433 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:06:16.157874  306433 start.go:496] detecting cgroup driver to use...
	I1216 05:06:16.157910  306433 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:06:16.157959  306433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:06:16.177412  306433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:06:16.193694  306433 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:06:16.193763  306433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:06:16.229481  306433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:06:16.268280  306433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:06:16.368564  306433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:06:16.492701  306433 docker.go:234] disabling docker service ...
	I1216 05:06:16.492819  306433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:06:16.520731  306433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:06:16.537336  306433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:06:16.639769  306433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:06:16.736158  306433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:06:16.749853  306433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:06:16.764897  306433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:06:16.764960  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.776640  306433 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:06:16.776700  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.785807  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.797915  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.808615  306433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:06:16.817330  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.826109  306433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.841445  306433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:16.851047  306433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:06:16.859445  306433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:06:16.867716  306433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:16.953613  306433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:06:17.104980  306433 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:06:17.105058  306433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:06:17.109011  306433 start.go:564] Will wait 60s for crictl version
	I1216 05:06:17.109131  306433 ssh_runner.go:195] Run: which crictl
	I1216 05:06:17.113169  306433 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:06:17.146413  306433 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:06:17.146493  306433 ssh_runner.go:195] Run: crio --version
	I1216 05:06:17.177083  306433 ssh_runner.go:195] Run: crio --version
	I1216 05:06:17.216574  306433 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
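Between "systemctl restart crio" and the crictl version check above, minikube waits up to 60s for the runtime socket to reappear ("Will wait 60s for socket path /var/run/crio/crio.sock"). A minimal sketch of that polling step (the 500ms interval is an assumption; the real flow also verifies crictl answers, as the next log lines show):

// sockwait.go - illustrative polling loop, not minikube source.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio.sock is ready")
}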
	W1216 05:06:14.821776  293223 pod_ready.go:104] pod "coredns-66bc5c9577-65x5z" is not "Ready", error: <nil>
	I1216 05:06:16.321706  293223 pod_ready.go:94] pod "coredns-66bc5c9577-65x5z" is "Ready"
	I1216 05:06:16.321734  293223 pod_ready.go:86] duration metric: took 36.506354154s for pod "coredns-66bc5c9577-65x5z" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.324422  293223 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.328510  293223 pod_ready.go:94] pod "etcd-default-k8s-diff-port-375252" is "Ready"
	I1216 05:06:16.328533  293223 pod_ready.go:86] duration metric: took 4.088428ms for pod "etcd-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.330541  293223 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.334327  293223 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-375252" is "Ready"
	I1216 05:06:16.334347  293223 pod_ready.go:86] duration metric: took 3.786455ms for pod "kube-apiserver-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.336390  293223 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.519906  293223 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-375252" is "Ready"
	I1216 05:06:16.519941  293223 pod_ready.go:86] duration metric: took 183.531316ms for pod "kube-controller-manager-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:16.719181  293223 pod_ready.go:83] waiting for pod "kube-proxy-dmw9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:17.119695  293223 pod_ready.go:94] pod "kube-proxy-dmw9t" is "Ready"
	I1216 05:06:17.119726  293223 pod_ready.go:86] duration metric: took 400.513863ms for pod "kube-proxy-dmw9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:17.319881  293223 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:17.719687  293223 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-375252" is "Ready"
	I1216 05:06:17.719712  293223 pod_ready.go:86] duration metric: took 399.802458ms for pod "kube-scheduler-default-k8s-diff-port-375252" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:06:17.719727  293223 pod_ready.go:40] duration metric: took 37.907937353s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:06:17.783977  293223 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:06:17.787906  293223 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-375252" cluster and "default" namespace by default
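The pod_ready.go lines above poll each control-plane pod until it is "Ready" or gone; the coredns wait alone consumed 36.5s of the 37.9s total. A sketch of that per-pod loop with client-go (not minikube's source; interval and timeout are assumptions, kubeconfig at the default location):

// podready.go - illustrative "Ready or gone" wait.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitReadyOrGone returns nil once the pod is Ready or has been deleted.
func waitReadyOrGone(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // "or be gone"
		}
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitReadyOrGone(client, "kube-system", "etcd-default-k8s-diff-port-375252", 4*time.Minute))
}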
	I1216 05:06:15.182180  302822 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.687219927s
	I1216 05:06:16.667772  302822 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.172825206s
	I1216 05:06:17.996485  302822 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502174664s
	I1216 05:06:18.016000  302822 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 05:06:18.028192  302822 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 05:06:18.038577  302822 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 05:06:18.038906  302822 kubeadm.go:319] [mark-control-plane] Marking the node auto-548714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 05:06:18.047842  302822 kubeadm.go:319] [bootstrap-token] Using token: c919d0.lrukntq29vxntbkj
	I1216 05:06:18.049600  302822 out.go:252]   - Configuring RBAC rules ...
	I1216 05:06:18.049740  302822 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 05:06:18.052707  302822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 05:06:18.058416  302822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 05:06:18.060948  302822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 05:06:18.064603  302822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 05:06:18.067795  302822 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 05:06:18.404066  302822 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 05:06:18.821113  302822 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 05:06:19.404085  302822 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 05:06:19.405296  302822 kubeadm.go:319] 
	I1216 05:06:19.405370  302822 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 05:06:19.405383  302822 kubeadm.go:319] 
	I1216 05:06:19.405471  302822 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 05:06:19.405482  302822 kubeadm.go:319] 
	I1216 05:06:19.405507  302822 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 05:06:19.405639  302822 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 05:06:19.405726  302822 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 05:06:19.405740  302822 kubeadm.go:319] 
	I1216 05:06:19.405786  302822 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 05:06:19.405793  302822 kubeadm.go:319] 
	I1216 05:06:19.405833  302822 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 05:06:19.405840  302822 kubeadm.go:319] 
	I1216 05:06:19.405905  302822 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 05:06:19.405973  302822 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 05:06:19.406064  302822 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 05:06:19.406075  302822 kubeadm.go:319] 
	I1216 05:06:19.406199  302822 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 05:06:19.406324  302822 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 05:06:19.406336  302822 kubeadm.go:319] 
	I1216 05:06:19.406458  302822 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c919d0.lrukntq29vxntbkj \
	I1216 05:06:19.406682  302822 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 05:06:19.406722  302822 kubeadm.go:319] 	--control-plane 
	I1216 05:06:19.406746  302822 kubeadm.go:319] 
	I1216 05:06:19.406862  302822 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 05:06:19.406873  302822 kubeadm.go:319] 
	I1216 05:06:19.406984  302822 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c919d0.lrukntq29vxntbkj \
	I1216 05:06:19.407158  302822 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
	I1216 05:06:19.410660  302822 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:06:19.410825  302822 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:06:19.410877  302822 cni.go:84] Creating CNI manager for ""
	I1216 05:06:19.410900  302822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:06:19.417866  302822 out.go:179] * Configuring CNI (Container Networking Interface) ...
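The cni.go lines above record the selection rule applied here: with the docker driver, the crio runtime, and no CNI requested, minikube recommends kindnet. A rough sketch of that decision under those assumptions (an illustrative stand-in, not the real chooseCNI implementation):

	package main

	import "fmt"

	// recommendCNI mirrors the choice logged by cni.go above: a KIC driver
	// paired with a non-docker runtime gets kindnet unless the user asked
	// for a specific CNI explicitly.
	func recommendCNI(driver, runtime, requested string) string {
		if requested != "" {
			return requested // explicit --cni flag wins
		}
		if driver == "docker" && runtime == "crio" {
			return "kindnet"
		}
		return "bridge"
	}

	func main() {
		fmt.Println(recommendCNI("docker", "crio", ""))       // kindnet
		fmt.Println(recommendCNI("docker", "crio", "calico")) // calico
	}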
	I1216 05:06:17.217863  306433 cli_runner.go:164] Run: docker network inspect kindnet-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:17.242809  306433 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1216 05:06:17.247892  306433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
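The bash one-liner above is an idempotent hosts-file update: strip any existing host.minikube.internal entry, append a fresh one, and copy the result back. The same pattern in Go, sketched against a local scratch file (minikube actually runs the bash version over SSH against the node's /etc/hosts):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry removes any line ending in "\t<name>" and appends a
	// fresh "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline above.
	func upsertHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Demo on a scratch copy rather than the real /etc/hosts.
		path := "/tmp/hosts-demo"
		os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
		if err := upsertHostsEntry(path, "192.168.94.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}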
	I1216 05:06:17.260512  306433 kubeadm.go:884] updating cluster {Name:kindnet-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:06:17.260654  306433 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:17.260715  306433 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:17.301982  306433 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:17.302006  306433 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:06:17.302072  306433 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:17.334803  306433 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:17.334823  306433 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:06:17.334831  306433 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1216 05:06:17.334914  306433 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-548714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 05:06:17.335066  306433 ssh_runner.go:195] Run: crio config
	I1216 05:06:17.401567  306433 cni.go:84] Creating CNI manager for "kindnet"
	I1216 05:06:17.401604  306433 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:06:17.401635  306433 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-548714 NodeName:kindnet-548714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:06:17.401828  306433 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-548714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
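The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of walking such a stream by document kind; the gopkg.in/yaml.v3 dependency is an assumption here, any multi-document YAML decoder works:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// kindOnly decodes just the "kind" field of each document so the stream
	// can be routed to the right typed config struct afterwards.
	type kindOnly struct {
		Kind string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc kindOnly
			if err := dec.Decode(&doc); err == io.EOF {
				break // end of the multi-document stream
			} else if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Println("found document:", doc.Kind)
		}
	}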
	
	I1216 05:06:17.401909  306433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:06:17.413779  306433 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:06:17.413870  306433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:06:17.423610  306433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1216 05:06:17.438889  306433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:06:17.456025  306433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1216 05:06:17.497009  306433 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:06:17.501749  306433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:06:17.519229  306433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:17.628202  306433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:17.653778  306433 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714 for IP: 192.168.94.2
	I1216 05:06:17.653801  306433 certs.go:195] generating shared ca certs ...
	I1216 05:06:17.653821  306433 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.654153  306433 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:06:17.654247  306433 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:06:17.654268  306433 certs.go:257] generating profile certs ...
	I1216 05:06:17.654346  306433 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.key
	I1216 05:06:17.654462  306433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.crt with IP's: []
	I1216 05:06:17.764008  306433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.crt ...
	I1216 05:06:17.764084  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.crt: {Name:mke6944cd5e230ef5b9c0df0e1fbf4ee9f9f453c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.764282  306433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.key ...
	I1216 05:06:17.764294  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/client.key: {Name:mk8d4989a112cae82c25474cfa910e701c396ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.764403  306433 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key.dea5b897
	I1216 05:06:17.764473  306433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt.dea5b897 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1216 05:06:17.938097  306433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt.dea5b897 ...
	I1216 05:06:17.938134  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt.dea5b897: {Name:mk5ff47cdfc0871e37bce8ce81b7bea99e8e8f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.938349  306433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key.dea5b897 ...
	I1216 05:06:17.938375  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key.dea5b897: {Name:mk8db4b77e31ec1c99c7a14549292caf6d8852d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:17.938525  306433 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt.dea5b897 -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt
	I1216 05:06:17.938659  306433 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key.dea5b897 -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key
	I1216 05:06:17.938758  306433 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.key
	I1216 05:06:17.938779  306433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.crt with IP's: []
	I1216 05:06:18.065161  306433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.crt ...
	I1216 05:06:18.065192  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.crt: {Name:mk7f853cc48a13763c5bd43f9210b1937965d312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:18.065392  306433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.key ...
	I1216 05:06:18.065413  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.key: {Name:mk34f0bd0856b976828b25c35600a353c132b27d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:18.065682  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:06:18.065732  306433 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:06:18.065746  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:06:18.065794  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:06:18.065827  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:06:18.065861  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:06:18.065924  306433 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:18.066740  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:06:18.088319  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:06:18.114943  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:06:18.138725  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:06:18.158609  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:06:18.178437  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:06:18.199468  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:06:18.218734  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kindnet-548714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:06:18.240736  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:06:18.262830  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:06:18.286555  306433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:06:18.308449  306433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:06:18.322890  306433 ssh_runner.go:195] Run: openssl version
	I1216 05:06:18.328941  306433 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:18.336339  306433 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:06:18.343938  306433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:18.347550  306433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:18.347615  306433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:18.383935  306433 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:06:18.392410  306433 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:06:18.400706  306433 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:06:18.410124  306433 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:06:18.419200  306433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:06:18.423776  306433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:06:18.423846  306433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:06:18.476667  306433 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:06:18.486508  306433 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
	I1216 05:06:18.494575  306433 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:06:18.503452  306433 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:06:18.513209  306433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:06:18.517897  306433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:06:18.517962  306433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:06:18.563791  306433 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:18.574433  306433 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:18.583769  306433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:06:18.587910  306433 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:06:18.587995  306433 kubeadm.go:401] StartCluster: {Name:kindnet-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:18.588108  306433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:06:18.588183  306433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:06:18.620557  306433 cri.go:89] found id: ""
	I1216 05:06:18.620636  306433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:06:18.630816  306433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:18.640929  306433 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:18.640996  306433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:18.654637  306433 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:18.654664  306433 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:18.654717  306433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:06:18.667284  306433 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:18.667613  306433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:18.677865  306433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:06:18.688365  306433 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:18.688428  306433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:18.696184  306433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:06:18.704206  306433 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:18.704273  306433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:18.711841  306433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:06:18.719852  306433 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:18.719916  306433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
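The lines above repeat the same check four times: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and remove the file when the endpoint is absent (here the files simply do not exist yet, so every grep exits with status 2). The loop, sketched locally in Go rather than over minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleConfigs mirrors the grep-then-rm pattern above: any config
	// that does not mention the expected endpoint is deleted so kubeadm can
	// regenerate it. Missing files are treated the same as stale ones.
	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // config already points at the right endpoint
			}
			os.Remove(f) // ignore errors: the file may simply not exist
			fmt.Println("removed (or absent):", f)
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}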
	I1216 05:06:18.728595  306433 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:18.798195  306433 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1216 05:06:18.866578  306433 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:06:16.941590  311037 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:06:16.941833  311037 start.go:159] libmachine.API.Create for "calico-548714" (driver="docker")
	I1216 05:06:16.941872  311037 client.go:173] LocalClient.Create starting
	I1216 05:06:16.941956  311037 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem
	I1216 05:06:16.941994  311037 main.go:143] libmachine: Decoding PEM data...
	I1216 05:06:16.942039  311037 main.go:143] libmachine: Parsing certificate...
	I1216 05:06:16.942119  311037 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem
	I1216 05:06:16.942147  311037 main.go:143] libmachine: Decoding PEM data...
	I1216 05:06:16.942184  311037 main.go:143] libmachine: Parsing certificate...
	I1216 05:06:16.942614  311037 cli_runner.go:164] Run: docker network inspect calico-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:06:16.959740  311037 cli_runner.go:211] docker network inspect calico-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:06:16.959838  311037 network_create.go:284] running [docker network inspect calico-548714] to gather additional debugging logs...
	I1216 05:06:16.959870  311037 cli_runner.go:164] Run: docker network inspect calico-548714
	W1216 05:06:16.978280  311037 cli_runner.go:211] docker network inspect calico-548714 returned with exit code 1
	I1216 05:06:16.978307  311037 network_create.go:287] error running [docker network inspect calico-548714]: docker network inspect calico-548714: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-548714 not found
	I1216 05:06:16.978323  311037 network_create.go:289] output of [docker network inspect calico-548714]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-548714 not found
	
	** /stderr **
	I1216 05:06:16.978435  311037 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:16.995573  311037 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
	I1216 05:06:16.996483  311037 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1228b9e11de3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:57:d2:64:54:d7} reservation:<nil>}
	I1216 05:06:16.997211  311037 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0398afc2111 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:cf:7d:1b:2a:e0} reservation:<nil>}
	I1216 05:06:16.997727  311037 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b4cd001204df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:4c:f5:dc:0a:90} reservation:<nil>}
	I1216 05:06:16.998540  311037 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f80fe0}
	I1216 05:06:16.998564  311037 network_create.go:124] attempt to create docker network calico-548714 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 05:06:16.998621  311037 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-548714 calico-548714
	I1216 05:06:17.050112  311037 network_create.go:108] docker network calico-548714 192.168.85.0/24 created
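The network.go lines above scan candidate private /24 subnets in steps of 9 (192.168.49.0, .58.0, .67.0, .76.0, ...) and take the first one not claimed by an existing bridge, here 192.168.85.0/24. A simplified sketch of that scan; the step size and starting subnet are read off the log, and the real code also inspects host interfaces and reservations:

	package main

	import "fmt"

	// firstFreeSubnet walks 192.168.<x>.0/24 candidates starting at x=49 in
	// steps of 9, returning the first subnet not present in taken, which is
	// the pattern visible in the "skipping subnet ... that is taken" lines.
	func firstFreeSubnet(taken map[string]bool) string {
		for x := 49; x <= 255; x += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", x)
			if !taken[subnet] {
				return subnet
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
	}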
	I1216 05:06:17.050149  311037 kic.go:121] calculated static IP "192.168.85.2" for the "calico-548714" container
	I1216 05:06:17.050245  311037 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:06:17.068873  311037 cli_runner.go:164] Run: docker volume create calico-548714 --label name.minikube.sigs.k8s.io=calico-548714 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:06:17.086743  311037 oci.go:103] Successfully created a docker volume calico-548714
	I1216 05:06:17.086832  311037 cli_runner.go:164] Run: docker run --rm --name calico-548714-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-548714 --entrypoint /usr/bin/test -v calico-548714:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 05:06:17.543293  311037 oci.go:107] Successfully prepared a docker volume calico-548714
	I1216 05:06:17.543367  311037 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:17.543383  311037 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:06:17.543471  311037 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-548714:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
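The docker run above extracts the preloaded image tarball straight into the freshly created volume by running tar inside a throwaway kicbase container, so the host never needs lz4 installed. An equivalent invocation from Go via os/exec (a sketch with shortened illustrative arguments; the log above shows the full tarball path and pinned image digest):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload replays the docker run seen above: mount the preload
	// tarball read-only, mount the named volume at /extractDir, and untar
	// with lz4 decompression inside the kicbase image.
	func extractPreload(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		err := extractPreload(
			"preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4",
			"calico-548714",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}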
	I1216 05:06:19.419289  302822 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 05:06:19.424959  302822 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 05:06:19.424983  302822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 05:06:19.438654  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 05:06:19.683312  302822 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 05:06:19.683465  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:19.683527  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-548714 minikube.k8s.io/updated_at=2025_12_16T05_06_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=auto-548714 minikube.k8s.io/primary=true
	I1216 05:06:19.697611  302822 ops.go:34] apiserver oom_adj: -16
	I1216 05:06:19.780117  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:20.280173  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:20.780155  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:21.280961  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:21.780143  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:22.280141  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:22.780159  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:23.280162  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:23.780167  302822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:23.865905  302822 kubeadm.go:1114] duration metric: took 4.182483113s to wait for elevateKubeSystemPrivileges
	I1216 05:06:23.865948  302822 kubeadm.go:403] duration metric: took 15.931077528s to StartCluster
	I1216 05:06:23.865968  302822 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:23.866060  302822 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:06:23.867370  302822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:23.867632  302822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 05:06:23.867648  302822 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:06:23.867629  302822 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:06:23.867722  302822 addons.go:70] Setting storage-provisioner=true in profile "auto-548714"
	I1216 05:06:23.867736  302822 addons.go:239] Setting addon storage-provisioner=true in "auto-548714"
	I1216 05:06:23.867755  302822 addons.go:70] Setting default-storageclass=true in profile "auto-548714"
	I1216 05:06:23.867871  302822 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-548714"
	I1216 05:06:23.867951  302822 config.go:182] Loaded profile config "auto-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:23.867759  302822 host.go:66] Checking if "auto-548714" exists ...
	I1216 05:06:23.868262  302822 cli_runner.go:164] Run: docker container inspect auto-548714 --format={{.State.Status}}
	I1216 05:06:23.868582  302822 cli_runner.go:164] Run: docker container inspect auto-548714 --format={{.State.Status}}
	I1216 05:06:23.869152  302822 out.go:179] * Verifying Kubernetes components...
	I1216 05:06:23.873173  302822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:23.896297  302822 addons.go:239] Setting addon default-storageclass=true in "auto-548714"
	I1216 05:06:23.896341  302822 host.go:66] Checking if "auto-548714" exists ...
	I1216 05:06:23.896800  302822 cli_runner.go:164] Run: docker container inspect auto-548714 --format={{.State.Status}}
	I1216 05:06:23.897715  302822 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:06:23.899182  302822 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:06:23.899218  302822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:06:23.899277  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:23.921406  302822 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:06:23.921437  302822 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:06:23.921522  302822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-548714
	I1216 05:06:23.933076  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:23.958959  302822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/auto-548714/id_rsa Username:docker}
	I1216 05:06:23.995816  302822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 05:06:24.045391  302822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:24.072281  302822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:06:24.090124  302822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:06:24.214132  302822 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1216 05:06:24.215075  302822 node_ready.go:35] waiting up to 15m0s for node "auto-548714" to be "Ready" ...
	I1216 05:06:24.510688  302822 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
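The sed pipeline at 05:06:23.995816 rewrites the CoreDNS Corefile in flight: it inserts a hosts block mapping host.minikube.internal to the gateway IP ahead of the forward directive (and adds a log directive after errors) before replacing the ConfigMap, which is what the "host record injected" line confirms. The hosts-block insertion as a Go sketch, pure string surgery on an example Corefile; applying it back to the cluster is out of scope here:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS hosts block immediately before the
	// "forward . /etc/resolv.conf" line, matching the sed pipeline above.
	func injectHostRecord(corefile, ip, name string) string {
		block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(block) // hosts block must come before forward
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.103.1", "host.minikube.internal"))
	}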
	I1216 05:06:21.778132  311037 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-548714:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.234593118s)
	I1216 05:06:21.778171  311037 kic.go:203] duration metric: took 4.234784476s to extract preloaded images to volume ...
	W1216 05:06:21.778283  311037 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1216 05:06:21.778322  311037 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1216 05:06:21.778368  311037 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 05:06:21.840611  311037 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-548714 --name calico-548714 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-548714 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-548714 --network calico-548714 --ip 192.168.85.2 --volume calico-548714:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 05:06:22.127725  311037 cli_runner.go:164] Run: docker container inspect calico-548714 --format={{.State.Running}}
	I1216 05:06:22.147710  311037 cli_runner.go:164] Run: docker container inspect calico-548714 --format={{.State.Status}}
	I1216 05:06:22.166077  311037 cli_runner.go:164] Run: docker exec calico-548714 stat /var/lib/dpkg/alternatives/iptables
	I1216 05:06:22.217549  311037 oci.go:144] the created container "calico-548714" has a running status.
	I1216 05:06:22.217581  311037 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa...
	I1216 05:06:22.288828  311037 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 05:06:22.322868  311037 cli_runner.go:164] Run: docker container inspect calico-548714 --format={{.State.Status}}
	I1216 05:06:22.346494  311037 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 05:06:22.346517  311037 kic_runner.go:114] Args: [docker exec --privileged calico-548714 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 05:06:22.400805  311037 cli_runner.go:164] Run: docker container inspect calico-548714 --format={{.State.Status}}
	I1216 05:06:22.418403  311037 machine.go:94] provisionDockerMachine start ...
	I1216 05:06:22.418504  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:22.440724  311037 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:22.441599  311037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1216 05:06:22.441617  311037 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:06:22.442503  311037 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51062->127.0.0.1:33125: read: connection reset by peer
	I1216 05:06:25.575651  311037 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-548714
	
	I1216 05:06:25.575690  311037 ubuntu.go:182] provisioning hostname "calico-548714"
	I1216 05:06:25.575774  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:25.597345  311037 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:25.597658  311037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1216 05:06:25.597680  311037 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-548714 && echo "calico-548714" | sudo tee /etc/hostname
	I1216 05:06:25.741526  311037 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-548714
	
	I1216 05:06:25.741614  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:25.769474  311037 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:25.769788  311037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1216 05:06:25.769812  311037 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-548714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-548714/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-548714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:06:25.914142  311037 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:06:25.914170  311037 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5066/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5066/.minikube}
	I1216 05:06:25.914191  311037 ubuntu.go:190] setting up certificates
	I1216 05:06:25.914203  311037 provision.go:84] configureAuth start
	I1216 05:06:25.914264  311037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-548714
	I1216 05:06:25.940676  311037 provision.go:143] copyHostCerts
	I1216 05:06:25.940779  311037 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem, removing ...
	I1216 05:06:25.940799  311037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem
	I1216 05:06:25.940870  311037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/ca.pem (1082 bytes)
	I1216 05:06:25.940987  311037 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem, removing ...
	I1216 05:06:25.941009  311037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem
	I1216 05:06:25.941083  311037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/cert.pem (1123 bytes)
	I1216 05:06:25.941174  311037 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem, removing ...
	I1216 05:06:25.941193  311037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem
	I1216 05:06:25.941221  311037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5066/.minikube/key.pem (1679 bytes)
	I1216 05:06:25.941285  311037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem org=jenkins.calico-548714 san=[127.0.0.1 192.168.85.2 calico-548714 localhost minikube]
	I1216 05:06:26.050252  311037 provision.go:177] copyRemoteCerts
	I1216 05:06:26.050314  311037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:06:26.050350  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.068759  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.166814  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:06:26.187222  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 05:06:26.205248  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:06:26.222721  311037 provision.go:87] duration metric: took 308.499904ms to configureAuth
	I1216 05:06:26.222747  311037 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:06:26.222932  311037 config.go:182] Loaded profile config "calico-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:26.223071  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.241919  311037 main.go:143] libmachine: Using SSH client type: native
	I1216 05:06:26.242202  311037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1216 05:06:26.242223  311037 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:06:26.523604  311037 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
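	
	The sysconfig drop-in written above holds extra CRI-O flags (presumably sourced by the crio systemd unit, which is why crio is restarted immediately afterwards), and the tee output echoed back confirms the write. A sketch for inspecting it after the fact:
	
	  $ docker exec calico-548714 cat /etc/sysconfig/crio.minikube
	  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	10.96.0.0/12 is the cluster's ServiceCIDR (see the config dump further down), so registries exposed as in-cluster Services are treated as insecure (plain-HTTP) registries.
	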
	
	I1216 05:06:26.523638  311037 machine.go:97] duration metric: took 4.105199142s to provisionDockerMachine
	I1216 05:06:26.523649  311037 client.go:176] duration metric: took 9.581767661s to LocalClient.Create
	I1216 05:06:26.523666  311037 start.go:167] duration metric: took 9.581842767s to libmachine.API.Create "calico-548714"
	I1216 05:06:26.523672  311037 start.go:293] postStartSetup for "calico-548714" (driver="docker")
	I1216 05:06:26.523681  311037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:06:26.523732  311037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:06:26.523768  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.541901  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.635903  311037 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:06:26.639528  311037 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:06:26.639553  311037 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:06:26.639569  311037 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/addons for local assets ...
	I1216 05:06:26.639610  311037 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5066/.minikube/files for local assets ...
	I1216 05:06:26.639685  311037 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem -> 85922.pem in /etc/ssl/certs
	I1216 05:06:26.639788  311037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:06:26.647230  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:26.666394  311037 start.go:296] duration metric: took 142.707698ms for postStartSetup
	I1216 05:06:26.666713  311037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-548714
	I1216 05:06:26.684831  311037 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/config.json ...
	I1216 05:06:26.685117  311037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:06:26.685165  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.703305  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.793366  311037 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:06:26.797904  311037 start.go:128] duration metric: took 9.858709s to createHost
	I1216 05:06:26.797929  311037 start.go:83] releasing machines lock for "calico-548714", held for 9.858855061s
	I1216 05:06:26.798041  311037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-548714
	I1216 05:06:26.815421  311037 ssh_runner.go:195] Run: cat /version.json
	I1216 05:06:26.815458  311037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:06:26.815465  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.815534  311037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-548714
	I1216 05:06:26.834742  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.835055  311037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/calico-548714/id_rsa Username:docker}
	I1216 05:06:26.993569  311037 ssh_runner.go:195] Run: systemctl --version
	I1216 05:06:27.001626  311037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:06:27.042085  311037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:06:27.047643  311037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:06:27.047701  311037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:06:27.076449  311037 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:06:27.076479  311037 start.go:496] detecting cgroup driver to use...
	I1216 05:06:27.076519  311037 detect.go:190] detected "systemd" cgroup driver on host os
	I1216 05:06:27.076569  311037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:06:27.095614  311037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:06:27.110816  311037 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:06:27.110875  311037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:06:27.130105  311037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:06:27.151195  311037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:06:27.256209  311037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:06:27.365188  311037 docker.go:234] disabling docker service ...
	I1216 05:06:27.365258  311037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:06:27.386898  311037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:06:27.402046  311037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:06:27.488051  311037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:06:27.575795  311037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:06:27.588361  311037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:06:27.602470  311037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:06:27.602535  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.612651  311037 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1216 05:06:27.612713  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.621140  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.629554  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.637808  311037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:06:27.645741  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.654455  311037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:06:27.667273  311037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
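	
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a reconstruction from the commands, not a dump of the real file; the section headers follow the stock CRI-O config layout):
	
	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	The daemon-reload and crio restart a few lines below are what make these edits take effect.
	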
	I1216 05:06:27.675658  311037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:06:27.683045  311037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:06:27.690868  311037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:27.779556  311037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:06:27.921683  311037 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:06:27.921750  311037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:06:27.925794  311037 start.go:564] Will wait 60s for crictl version
	I1216 05:06:27.925847  311037 ssh_runner.go:195] Run: which crictl
	I1216 05:06:27.930031  311037 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:06:27.961214  311037 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:06:27.961290  311037 ssh_runner.go:195] Run: crio --version
	I1216 05:06:27.989664  311037 ssh_runner.go:195] Run: crio --version
	I1216 05:06:28.020610  311037 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:06:24.512090  302822 addons.go:530] duration metric: took 644.433155ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:06:24.718081  302822 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-548714" context rescaled to 1 replicas
	W1216 05:06:26.218757  302822 node_ready.go:57] node "auto-548714" has "Ready":"False" status (will retry)
	W1216 05:06:28.720204  302822 node_ready.go:57] node "auto-548714" has "Ready":"False" status (will retry)
	I1216 05:06:29.092327  306433 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 05:06:29.092398  306433 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:06:29.092517  306433 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:06:29.092593  306433 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:06:29.092645  306433 kubeadm.go:319] OS: Linux
	I1216 05:06:29.092736  306433 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:06:29.092823  306433 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:06:29.092893  306433 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:06:29.092966  306433 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:06:29.093055  306433 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:06:29.093108  306433 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:06:29.093169  306433 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:06:29.093228  306433 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:06:29.093361  306433 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:06:29.093512  306433 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:06:29.093654  306433 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:06:29.093725  306433 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:06:29.095625  306433 out.go:252]   - Generating certificates and keys ...
	I1216 05:06:29.095718  306433 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:06:29.095801  306433 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:06:29.095877  306433 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:06:29.095994  306433 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 05:06:29.096118  306433 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 05:06:29.096196  306433 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 05:06:29.096351  306433 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 05:06:29.096537  306433 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-548714 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 05:06:29.096592  306433 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 05:06:29.096700  306433 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-548714 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 05:06:29.096768  306433 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 05:06:29.096840  306433 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 05:06:29.096888  306433 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 05:06:29.096959  306433 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:06:29.097001  306433 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:06:29.097086  306433 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:06:29.097173  306433 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:06:29.097265  306433 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:06:29.097341  306433 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:06:29.097444  306433 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:06:29.097540  306433 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:06:29.098840  306433 out.go:252]   - Booting up control plane ...
	I1216 05:06:29.098929  306433 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:06:29.099039  306433 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:06:29.099121  306433 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:06:29.099270  306433 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:06:29.099424  306433 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:06:29.099534  306433 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:06:29.099640  306433 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:06:29.099716  306433 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:06:29.099903  306433 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:06:29.100060  306433 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:06:29.100109  306433 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001053555s
	I1216 05:06:29.100196  306433 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 05:06:29.100292  306433 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1216 05:06:29.100424  306433 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 05:06:29.100531  306433 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 05:06:29.100630  306433 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.922817936s
	I1216 05:06:29.100719  306433 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.266581857s
	I1216 05:06:29.100810  306433 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001430761s
	I1216 05:06:29.101001  306433 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 05:06:29.101184  306433 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 05:06:29.101258  306433 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 05:06:29.101464  306433 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-548714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 05:06:29.101533  306433 kubeadm.go:319] [bootstrap-token] Using token: mzm2oc.3ozg1p2oxw32434g
	I1216 05:06:29.102723  306433 out.go:252]   - Configuring RBAC rules ...
	I1216 05:06:29.102840  306433 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 05:06:29.102938  306433 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 05:06:29.103127  306433 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 05:06:29.103322  306433 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 05:06:29.103504  306433 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 05:06:29.103633  306433 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 05:06:29.103794  306433 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 05:06:29.103837  306433 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 05:06:29.103904  306433 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 05:06:29.103910  306433 kubeadm.go:319] 
	I1216 05:06:29.103988  306433 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 05:06:29.103995  306433 kubeadm.go:319] 
	I1216 05:06:29.104095  306433 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 05:06:29.104104  306433 kubeadm.go:319] 
	I1216 05:06:29.104138  306433 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 05:06:29.104211  306433 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 05:06:29.104287  306433 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 05:06:29.104300  306433 kubeadm.go:319] 
	I1216 05:06:29.104389  306433 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 05:06:29.104399  306433 kubeadm.go:319] 
	I1216 05:06:29.104480  306433 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 05:06:29.104493  306433 kubeadm.go:319] 
	I1216 05:06:29.104553  306433 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 05:06:29.104652  306433 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 05:06:29.104745  306433 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 05:06:29.104752  306433 kubeadm.go:319] 
	I1216 05:06:29.104858  306433 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 05:06:29.104954  306433 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 05:06:29.104968  306433 kubeadm.go:319] 
	I1216 05:06:29.105091  306433 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mzm2oc.3ozg1p2oxw32434g \
	I1216 05:06:29.105200  306433 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe \
	I1216 05:06:29.105227  306433 kubeadm.go:319] 	--control-plane 
	I1216 05:06:29.105236  306433 kubeadm.go:319] 
	I1216 05:06:29.105364  306433 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 05:06:29.105385  306433 kubeadm.go:319] 
	I1216 05:06:29.105462  306433 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mzm2oc.3ozg1p2oxw32434g \
	I1216 05:06:29.105561  306433 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b70c239bdefd9c8dc05a33d45325d8bc9e004b487de6e296a05efd2936eda9fe 
	I1216 05:06:29.105570  306433 cni.go:84] Creating CNI manager for "kindnet"
	I1216 05:06:29.107887  306433 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 05:06:28.021806  311037 cli_runner.go:164] Run: docker network inspect calico-548714 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:06:28.043564  311037 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 05:06:28.047842  311037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:06:28.058029  311037 kubeadm.go:884] updating cluster {Name:calico-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:06:28.058173  311037 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:06:28.058215  311037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:28.089868  311037 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:28.089888  311037 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:06:28.089944  311037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:06:28.120238  311037 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:06:28.120274  311037 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:06:28.120284  311037 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1216 05:06:28.120392  311037 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-548714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
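	
	The unit fragment above is written to the node in the next few lines as a systemd drop-in (10-kubeadm.conf) plus kubelet.service. Once in place, the merged result can be reviewed with systemd's own tooling; a sketch:
	
	  # on the node: print kubelet.service together with every drop-in, including 10-kubeadm.conf
	  $ systemctl cat kubelet
	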
	I1216 05:06:28.120481  311037 ssh_runner.go:195] Run: crio config
	I1216 05:06:28.171404  311037 cni.go:84] Creating CNI manager for "calico"
	I1216 05:06:28.171434  311037 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:06:28.171472  311037 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-548714 NodeName:calico-548714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:06:28.171631  311037 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-548714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
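	
	This is the generated kubeadm config, scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A file like this can be sanity-checked before it is handed to kubeadm; a sketch, assuming a kubeadm new enough (v1.26+) to ship the validate subcommand:
	
	  $ kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	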
	
	I1216 05:06:28.171714  311037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:06:28.180143  311037 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:06:28.180206  311037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:06:28.188158  311037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 05:06:28.201802  311037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:06:28.217990  311037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 05:06:28.231559  311037 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:06:28.236465  311037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:06:28.247524  311037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:28.354965  311037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:28.381482  311037 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714 for IP: 192.168.85.2
	I1216 05:06:28.381507  311037 certs.go:195] generating shared ca certs ...
	I1216 05:06:28.381525  311037 certs.go:227] acquiring lock for ca certs: {Name:mkabe3944655d5a8341d9df8cd56db76352a005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.381671  311037 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key
	I1216 05:06:28.381726  311037 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key
	I1216 05:06:28.381741  311037 certs.go:257] generating profile certs ...
	I1216 05:06:28.381795  311037 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.key
	I1216 05:06:28.381811  311037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.crt with IP's: []
	I1216 05:06:28.427750  311037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.crt ...
	I1216 05:06:28.427782  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.crt: {Name:mke1a5169816bd7f9795fdf5df1d0b050bbcd1b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.427953  311037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.key ...
	I1216 05:06:28.427964  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/client.key: {Name:mk8dd9b5942e6166ddefffd1b9b9315328d90f26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.428070  311037 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key.a66e11a3
	I1216 05:06:28.428085  311037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt.a66e11a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1216 05:06:28.555207  311037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt.a66e11a3 ...
	I1216 05:06:28.555234  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt.a66e11a3: {Name:mk6cabbd9369101497ce2b91312177a7223ca3f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.555407  311037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key.a66e11a3 ...
	I1216 05:06:28.555419  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key.a66e11a3: {Name:mk63a2876c3fba88d6e89c1572dae03732d34e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.555492  311037 certs.go:382] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt.a66e11a3 -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt
	I1216 05:06:28.555575  311037 certs.go:386] copying /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key.a66e11a3 -> /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key
	I1216 05:06:28.555628  311037 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.key
	I1216 05:06:28.555642  311037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.crt with IP's: []
	I1216 05:06:28.620664  311037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.crt ...
	I1216 05:06:28.620689  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.crt: {Name:mk47ced84380f06c3cafcd86283bb954b01bc3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:28.620837  311037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.key ...
	I1216 05:06:28.620848  311037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.key: {Name:mk482284df6651ebb6b5b1982bad63eda7eff263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
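	
	The apiserver serving cert generated above was requested with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]; that can be confirmed offline with openssl, a sketch:
	
	  $ openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	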
	I1216 05:06:28.621015  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem (1338 bytes)
	W1216 05:06:28.621065  311037 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592_empty.pem, impossibly tiny 0 bytes
	I1216 05:06:28.621074  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:06:28.621097  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:06:28.621121  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:06:28.621146  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/certs/key.pem (1679 bytes)
	I1216 05:06:28.621194  311037 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem (1708 bytes)
	I1216 05:06:28.621715  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:06:28.640424  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:06:28.657804  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:06:28.675307  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 05:06:28.694121  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:06:28.714715  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:06:28.738949  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:06:28.758519  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/calico-548714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:06:28.776566  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:06:28.794882  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/certs/8592.pem --> /usr/share/ca-certificates/8592.pem (1338 bytes)
	I1216 05:06:28.812343  311037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/ssl/certs/85922.pem --> /usr/share/ca-certificates/85922.pem (1708 bytes)
	I1216 05:06:28.829774  311037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:06:28.843341  311037 ssh_runner.go:195] Run: openssl version
	I1216 05:06:28.849472  311037 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85922.pem
	I1216 05:06:28.856988  311037 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85922.pem /etc/ssl/certs/85922.pem
	I1216 05:06:28.864221  311037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85922.pem
	I1216 05:06:28.867909  311037 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:34 /usr/share/ca-certificates/85922.pem
	I1216 05:06:28.867975  311037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85922.pem
	I1216 05:06:28.902512  311037 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:28.910515  311037 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85922.pem /etc/ssl/certs/3ec20f2e.0
	I1216 05:06:28.917541  311037 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:28.924389  311037 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:06:28.931188  311037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:28.934791  311037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:25 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:28.934838  311037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:06:28.972087  311037 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:06:28.980126  311037 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:06:28.987849  311037 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8592.pem
	I1216 05:06:28.995182  311037 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8592.pem /etc/ssl/certs/8592.pem
	I1216 05:06:29.002642  311037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8592.pem
	I1216 05:06:29.006377  311037 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:34 /usr/share/ca-certificates/8592.pem
	I1216 05:06:29.006430  311037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8592.pem
	I1216 05:06:29.044098  311037 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:06:29.053355  311037 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8592.pem /etc/ssl/certs/51391683.0
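	
	The .0 symlink names above (51391683.0, b5213941.0, 3ec20f2e.0) are OpenSSL subject-hash names: the openssl x509 -hash calls print the hash, and OpenSSL's default verifier looks up <hash>.0 under /etc/ssl/certs. A sketch of the mapping for the minikube CA:
	
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  b5213941
	  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	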
	I1216 05:06:29.061363  311037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:06:29.064866  311037 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 05:06:29.064916  311037 kubeadm.go:401] StartCluster: {Name:calico-548714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-548714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:06:29.064995  311037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:06:29.065050  311037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:06:29.095705  311037 cri.go:89] found id: ""
	I1216 05:06:29.095768  311037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:06:29.105306  311037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:29.113972  311037 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:29.114067  311037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:29.121992  311037 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:29.122094  311037 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:29.122150  311037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:06:29.130831  311037 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:29.130892  311037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:29.138589  311037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:06:29.147365  311037 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:29.147412  311037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:29.155659  311037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:06:29.163613  311037 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:29.163668  311037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:29.171124  311037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:06:29.179344  311037 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:29.179393  311037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:06:29.187335  311037 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:29.234049  311037 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 05:06:29.234122  311037 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:06:29.265387  311037 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:06:29.265482  311037 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1216 05:06:29.265566  311037 kubeadm.go:319] OS: Linux
	I1216 05:06:29.265643  311037 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:06:29.265708  311037 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:06:29.265770  311037 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:06:29.265837  311037 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:06:29.265934  311037 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:06:29.266060  311037 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:06:29.266134  311037 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:06:29.266210  311037 kubeadm.go:319] CGROUPS_IO: enabled
	I1216 05:06:29.346130  311037 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:06:29.346342  311037 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:06:29.346501  311037 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:06:29.362538  311037 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
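	
	Note the long --ignore-preflight-errors list in the kubeadm init invocation above: inside a docker-driver container, checks such as Swap, Mem, and SystemVerification are expected to fail, so minikube suppresses them up front. The same checks can be exercised in isolation; a sketch:
	
	  $ sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
	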
	I1216 05:06:29.109060  306433 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 05:06:29.113727  306433 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 05:06:29.113748  306433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 05:06:29.126936  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 05:06:29.403145  306433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 05:06:29.403317  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:29.403419  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-548714 minikube.k8s.io/updated_at=2025_12_16T05_06_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kindnet-548714 minikube.k8s.io/primary=true
	I1216 05:06:29.415984  306433 ops.go:34] apiserver oom_adj: -16
	I1216 05:06:29.508360  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:30.008491  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:29.365566  311037 out.go:252]   - Generating certificates and keys ...
	I1216 05:06:29.365670  311037 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:06:29.365776  311037 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:06:29.759942  311037 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 05:06:30.402951  311037 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 05:06:30.781719  311037 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 05:06:30.944764  311037 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 05:06:31.045678  311037 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 05:06:31.045912  311037 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-548714 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:06:31.405068  311037 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 05:06:31.405258  311037 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-548714 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 05:06:31.613446  311037 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 05:06:30.509371  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:31.008762  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:31.509201  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:32.009210  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:32.509228  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:33.009266  306433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:06:33.086982  306433 kubeadm.go:1114] duration metric: took 3.68371763s to wait for elevateKubeSystemPrivileges
	I1216 05:06:33.087037  306433 kubeadm.go:403] duration metric: took 14.49904575s to StartCluster
	I1216 05:06:33.087060  306433 settings.go:142] acquiring lock: {Name:mk0836201ed3f9b625f40b44e9b449c559be9ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:33.087139  306433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:06:33.088273  306433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/kubeconfig: {Name:mk851fa4b1fffcf46d4e99cafb0706cf30eee9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:06:33.088541  306433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 05:06:33.088547  306433 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:06:33.088632  306433 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:06:33.088729  306433 config.go:182] Loaded profile config "kindnet-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:06:33.088748  306433 addons.go:70] Setting default-storageclass=true in profile "kindnet-548714"
	I1216 05:06:33.088730  306433 addons.go:70] Setting storage-provisioner=true in profile "kindnet-548714"
	I1216 05:06:33.088777  306433 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-548714"
	I1216 05:06:33.088788  306433 addons.go:239] Setting addon storage-provisioner=true in "kindnet-548714"
	I1216 05:06:33.088809  306433 host.go:66] Checking if "kindnet-548714" exists ...
	I1216 05:06:33.089196  306433 cli_runner.go:164] Run: docker container inspect kindnet-548714 --format={{.State.Status}}
	I1216 05:06:33.089395  306433 cli_runner.go:164] Run: docker container inspect kindnet-548714 --format={{.State.Status}}
	I1216 05:06:33.090949  306433 out.go:179] * Verifying Kubernetes components...
	I1216 05:06:33.094565  306433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:06:33.114831  306433 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:06:32.075945  311037 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 05:06:32.188261  311037 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 05:06:32.188347  311037 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:06:32.452330  311037 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:06:32.645202  311037 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:06:32.854611  311037 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:06:33.284790  311037 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:06:33.374669  311037 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:06:33.376379  311037 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:06:33.380357  311037 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:06:33.115775  306433 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:06:33.115789  306433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:06:33.115841  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:33.116527  306433 addons.go:239] Setting addon default-storageclass=true in "kindnet-548714"
	I1216 05:06:33.116558  306433 host.go:66] Checking if "kindnet-548714" exists ...
	I1216 05:06:33.116880  306433 cli_runner.go:164] Run: docker container inspect kindnet-548714 --format={{.State.Status}}
	I1216 05:06:33.148063  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:33.149964  306433 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:06:33.150036  306433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:06:33.150120  306433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-548714
	I1216 05:06:33.174853  306433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/kindnet-548714/id_rsa Username:docker}
	I1216 05:06:33.200484  306433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 05:06:33.251396  306433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:06:33.270446  306433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:06:33.289851  306433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:06:33.397232  306433 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1216 05:06:33.399150  306433 node_ready.go:35] waiting up to 15m0s for node "kindnet-548714" to be "Ready" ...
	I1216 05:06:33.630157  306433 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1216 05:06:31.217743  302822 node_ready.go:57] node "auto-548714" has "Ready":"False" status (will retry)
	W1216 05:06:33.219142  302822 node_ready.go:57] node "auto-548714" has "Ready":"False" status (will retry)
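
Note on the bring-up log above: once kubeadm finishes, minikube grants cluster-admin to the kube-system default service account (the elevateKubeSystemPrivileges step whose polling loop is visible above) and rewrites the CoreDNS ConfigMap so pods can resolve host.minikube.internal. A minimal bash sketch of those two steps, assuming the in-node binary and kubeconfig paths shown in the ssh_runner commands:

    # Paths match the ssh_runner commands above; adjust for other setups.
    KUBECTL="sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"

    # 1. Bind cluster-admin to kube-system:default, then poll until the
    #    default service account exists (the repeated `get sa default` above).
    $KUBECTL create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    until $KUBECTL get sa default >/dev/null 2>&1; do sleep 0.5; done

    # 2. Insert a `hosts` block ahead of the `forward` plugin in the CoreDNS
    #    Corefile (simplified from the sed pipeline above) and replace it.
    $KUBECTL -n kube-system get configmap coredns -o yaml \
      | sed '/forward . \/etc\/resolv.conf/i\        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' \
      | $KUBECTL replace -f -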
	
	
	==> CRI-O <==
	Dec 16 05:05:49 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:05:49.886006175Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:05:49 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:05:49.890086417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:05:49 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:05:49.890111378Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.045866394Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f6a40e70-66b6-484d-a88a-c87d81bc3e39 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.049328432Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8cd5b7a6-efe6-4352-b417-c2c88c7a6d18 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.052819614Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg/dashboard-metrics-scraper" id=661ce33b-f483-4808-99f8-a2c5f4d2d3ff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.052982563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.061480242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.062120284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.087709672Z" level=info msg="Created container 239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg/dashboard-metrics-scraper" id=661ce33b-f483-4808-99f8-a2c5f4d2d3ff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.08850349Z" level=info msg="Starting container: 239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe" id=1dcae4a5-487a-49ab-a4e5-e156c47903ab name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.090281112Z" level=info msg="Started container" PID=1775 containerID=239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg/dashboard-metrics-scraper id=1dcae4a5-487a-49ab-a4e5-e156c47903ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e1af71ba078b88311b22517d4123cdfb8e79bd1bb51f536b9bd92a59bea4afe
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.16213144Z" level=info msg="Removing container: 9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee" id=7c485dc5-84c7-4fc7-85f4-9d75f0756b18 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:06:03 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:03.177933893Z" level=info msg="Removed container 9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg/dashboard-metrics-scraper" id=7c485dc5-84c7-4fc7-85f4-9d75f0756b18 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.18545241Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=81cd8321-e492-4b01-bab1-768bd5e4f10b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.186426026Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fbcd978-a887-4762-9c51-ba7e93d68363 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.188492896Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=631cbaa3-46bf-4739-b982-1dc338979b91 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.188633879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.198186581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.198372311Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f7df6ecc2f1bd19d1e29a6525104fe10eca18d502143edefbf48a84c7de9e03c/merged/etc/passwd: no such file or directory"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.198403476Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f7df6ecc2f1bd19d1e29a6525104fe10eca18d502143edefbf48a84c7de9e03c/merged/etc/group: no such file or directory"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.198710191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.281540218Z" level=info msg="Created container aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1: kube-system/storage-provisioner/storage-provisioner" id=631cbaa3-46bf-4739-b982-1dc338979b91 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.283196611Z" level=info msg="Starting container: aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1" id=77470a2b-462d-427a-bc9e-5a1a78dc92b8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:06:10 default-k8s-diff-port-375252 crio[570]: time="2025-12-16T05:06:10.286715409Z" level=info msg="Started container" PID=1789 containerID=aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1 description=kube-system/storage-provisioner/storage-provisioner id=77470a2b-462d-427a-bc9e-5a1a78dc92b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8aaaa33b3fab90063d00571412e5932f5e68f99732f780e56bc06ebbfdc8ff05
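
The CRI-O entries above trace the standard CRI call sequence per container: ImageStatus, CreateContainer, StartContainer (plus RemoveContainer for the crash-looping metrics scraper). A hedged way to poke at the same state with crictl, assuming CRI-O's default socket path:

    CRICTL="sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock"
    $CRICTL images | grep -E 'echoserver|storage-provisioner'   # what ImageStatus checked
    $CRICTL ps -a --name dashboard-metrics-scraper              # created/started/exited state
    $CRICTL logs 239478b010936                                  # output of the container started above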
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	aafdcce5933f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   8aaaa33b3fab9       storage-provisioner                                    kube-system
	239478b010936       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   2e1af71ba078b       dashboard-metrics-scraper-6ffb444bf9-gsxcg             kubernetes-dashboard
	227abf9a6095b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   b021e84e7b30e       kubernetes-dashboard-855c9754f9-bbfmr                  kubernetes-dashboard
	676776061bc68       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   b0e033aa7db48       coredns-66bc5c9577-65x5z                               kube-system
	74770f4787beb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   a32a6bf7b9285       busybox                                                default
	fa05464129c4c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   0326bd38f80c1       kindnet-xml94                                          kube-system
	d7bcd1550ff49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   8aaaa33b3fab9       storage-provisioner                                    kube-system
	e04d861b0da30       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           55 seconds ago      Running             kube-proxy                  0                   5e6147af5af81       kube-proxy-dmw9t                                       kube-system
	3643b4b2bf495       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           59 seconds ago      Running             kube-apiserver              0                   c9c38a97a0748       kube-apiserver-default-k8s-diff-port-375252            kube-system
	fb5bb0a8d03a4       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           59 seconds ago      Running             kube-scheduler              0                   fa4ca6bbbc69c       kube-scheduler-default-k8s-diff-port-375252            kube-system
	bdd567a96f2d0       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           59 seconds ago      Running             kube-controller-manager     0                   3ea36d2405bdf       kube-controller-manager-default-k8s-diff-port-375252   kube-system
	5031357bebf46       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   2f2bdddc2b036       etcd-default-k8s-diff-port-375252                      kube-system
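
Worth noting in the table: everything is Running except dashboard-metrics-scraper, already on attempt 2 and Exited, and storage-provisioner carries one restart. A quick follow-up on the scraper, as a sketch:

    kubectl -n kubernetes-dashboard get pod dashboard-metrics-scraper-6ffb444bf9-gsxcg
    kubectl -n kubernetes-dashboard logs --previous \
      dashboard-metrics-scraper-6ffb444bf9-gsxcg   # output of the last crashed attempt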
	
	
	==> coredns [676776061bc68b44dab6f7b1af084b1b49c0ccdbceaf6cf33a84c892af410ff0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50395 - 11661 "HINFO IN 6607428354816222733.9146165306431085929. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045599452s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
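
The reflector timeouts above mean CoreDNS could not reach the kubernetes Service VIP (10.96.0.1:443) while the control plane was restarting; the "Caches are synced" lines in the kube-proxy log further down suggest the path recovered once the iptables rules were reprogrammed. One way to verify the VIP is programmed on the node, sketched under the iptables-proxier assumption that kube-proxy's own log confirms:

    sudo iptables -t nat -nL KUBE-SERVICES | grep 10.96.0.1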
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-375252
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-375252
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=default-k8s-diff-port-375252
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_04_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:04:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-375252
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:06:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:06:09 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:06:09 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:06:09 +0000   Tue, 16 Dec 2025 05:04:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:06:09 +0000   Tue, 16 Dec 2025 05:04:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-375252
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                226f3bd9-396d-4746-a6f5-231ce56a4f47
	  Boot ID:                    0c1042b4-53ca-4ff2-bf37-0a698e2224f2
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-65x5z                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-default-k8s-diff-port-375252                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-xml94                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-375252             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-375252    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-dmw9t                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-375252             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gsxcg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bbfmr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           113s               node-controller  Node default-k8s-diff-port-375252 event: Registered Node default-k8s-diff-port-375252 in Controller
	  Normal  NodeReady                101s               kubelet          Node default-k8s-diff-port-375252 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-375252 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-375252 event: Registered Node default-k8s-diff-port-375252 in Controller
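
The doubled kubelet events (two "Starting kubelet." entries, the second with x8 NodeHas* events around the 60s mark) line up with the node restart earlier in this run. This dump is what kubectl produces directly:

    kubectl describe node default-k8s-diff-port-375252
    kubectl get events -A --sort-by=.lastTimestamp | tail -n 20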
	
	
	==> dmesg <==
	[  +0.086163] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023851] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.106236] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 04:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.044214] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023887] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023863] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +1.024886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +2.046823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[Dec16 04:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[  +8.447119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +16.382269] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	[ +32.252603] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 12 eb 96 c9 3b 06 73 42 0a b3 1f 08 00
	
	
	==> etcd [5031357bebf46792e8045505b602649ca344573ae8ce94fdd3cdf19add508e01] <==
	{"level":"warn","ts":"2025-12-16T05:05:37.563378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.569594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.575931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.591234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.597417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.603833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.610854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.617972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.624457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.632135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.639273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.645621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.654272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.661819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.668394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.680438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.687405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.694304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:05:37.749313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42520","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T05:06:09.500117Z","caller":"traceutil/trace.go:172","msg":"trace[1333349480] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"111.60101ms","start":"2025-12-16T05:06:09.388496Z","end":"2025-12-16T05:06:09.500098Z","steps":["trace[1333349480] 'process raft request'  (duration: 111.439888ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T05:06:10.005882Z","caller":"traceutil/trace.go:172","msg":"trace[1494257111] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"153.294419ms","start":"2025-12-16T05:06:09.852571Z","end":"2025-12-16T05:06:10.005865Z","steps":["trace[1494257111] 'process raft request'  (duration: 153.170315ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:06:21.389091Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.8556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T05:06:21.389205Z","caller":"traceutil/trace.go:172","msg":"trace[2084687672] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:627; }","duration":"196.013758ms","start":"2025-12-16T05:06:21.193173Z","end":"2025-12-16T05:06:21.389187Z","steps":["trace[2084687672] 'range keys from in-memory index tree'  (duration: 195.781042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T05:06:21.389093Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.36499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T05:06:21.389400Z","caller":"traceutil/trace.go:172","msg":"trace[907062882] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:627; }","duration":"115.706906ms","start":"2025-12-16T05:06:21.273672Z","end":"2025-12-16T05:06:21.389379Z","steps":["trace[907062882] 'range keys from in-memory index tree'  (duration: 115.300269ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:06:35 up 49 min,  0 user,  load average: 5.80, 3.52, 2.26
	Linux default-k8s-diff-port-375252 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fa05464129c4cd5d2065744b02dba6d46de76d42ffb639148532972ad4ace1c8] <==
	I1216 05:05:39.600358       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:05:39.600619       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 05:05:39.600782       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:05:39.600799       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:05:39.600820       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:05:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:05:39.896369       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:05:39.896472       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:05:39.896517       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:05:39.896701       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:05:40.297080       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:05:40.297116       1 metrics.go:72] Registering metrics
	I1216 05:05:40.297226       1 controller.go:711] "Syncing nftables rules"
	I1216 05:05:49.869605       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:05:49.869652       1 main.go:301] handling current node
	I1216 05:05:59.876122       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:05:59.876162       1 main.go:301] handling current node
	I1216 05:06:09.868742       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:06:09.868775       1 main.go:301] handling current node
	I1216 05:06:19.872112       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:06:19.872165       1 main.go:301] handling current node
	I1216 05:06:29.878104       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:06:29.878141       1 main.go:301] handling current node
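
kindnet above settles into a steady state: network-policy nftables rules synced, then a reconcile of the node's IPs roughly every 10s; the nri.sock dial failure is benign when NRI is not enabled in the runtime. To list the DaemonSet pods (the app=kindnet label is minikube's convention and an assumption here):

    kubectl -n kube-system get pods -l app=kindnet -o wide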
	
	
	==> kube-apiserver [3643b4b2bf4956cd9f2eb844e904fb1254518470bcdf64a25458bd5eb92c0e45] <==
	I1216 05:05:38.232175       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:05:38.232868       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 05:05:38.232887       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 05:05:38.233063       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:05:38.233253       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:05:38.233388       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 05:05:38.233460       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 05:05:38.233712       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 05:05:38.233787       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:05:38.243928       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1216 05:05:38.248061       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:05:38.258137       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:05:38.295887       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:05:38.310959       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 05:05:38.557297       1 controller.go:667] quota admission added evaluator for: namespaces
	I1216 05:05:38.584425       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:05:38.602007       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:05:38.608967       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:05:38.616685       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:05:38.649826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.129.30"}
	I1216 05:05:38.662404       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.57.111"}
	I1216 05:05:39.135747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:05:41.998870       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:05:42.147414       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 05:05:42.198816       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
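
Alongside the usual quota-admission evaluators, the apiserver log records ClusterIP allocation for the two dashboard Services (10.98.129.30 and 10.110.57.111). To confirm:

    kubectl -n kubernetes-dashboard get svc -o wide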
	
	
	==> kube-controller-manager [bdd567a96f2d0218b0b849ff408a4edea382a9b67a03bacb080f485537541fff] <==
	I1216 05:05:41.562385       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 05:05:41.575659       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 05:05:41.575722       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 05:05:41.575774       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 05:05:41.575780       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 05:05:41.575785       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 05:05:41.576840       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 05:05:41.579046       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 05:05:41.594491       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 05:05:41.594525       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:05:41.594550       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 05:05:41.594675       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 05:05:41.594790       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 05:05:41.595050       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 05:05:41.595699       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 05:05:41.595725       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1216 05:05:41.599990       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 05:05:41.600056       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:05:41.601176       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 05:05:41.603517       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 05:05:41.605854       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 05:05:41.605916       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 05:05:41.605987       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-375252"
	I1216 05:05:41.606071       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 05:05:41.617934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e04d861b0da300bb613395b582a587069a355bde3fc0afcfb3f515297db72226] <==
	I1216 05:05:39.481550       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:05:39.556597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:05:39.657692       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:05:39.658043       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 05:05:39.658322       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:05:39.676686       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:05:39.676736       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:05:39.681575       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:05:39.681973       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:05:39.682048       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:39.683241       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:05:39.683263       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:05:39.683275       1 config.go:200] "Starting service config controller"
	I1216 05:05:39.683289       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:05:39.683301       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:05:39.683307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:05:39.683321       1 config.go:309] "Starting node config controller"
	I1216 05:05:39.683326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:05:39.683332       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:05:39.783483       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:05:39.783488       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:05:39.783496       1 shared_informer.go:356] "Caches are synced" controller="service config"
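
kube-proxy's only complaint is the unset nodePortAddresses, meaning NodePorts are accepted on all local IPs; its own message suggests `--nodeport-addresses primary`. In a kubeadm/minikube-style cluster that setting lives in the kube-proxy ConfigMap (a sketch; the ConfigMap name is the usual kubeadm convention and an assumption here):

    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses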
	
	
	==> kube-scheduler [fb5bb0a8d03a44c04ffd5a300e8e2e7d9265b989ee99b10e3bdfaaab6f4360e3] <==
	I1216 05:05:36.483555       1 serving.go:386] Generated self-signed cert in-memory
	W1216 05:05:38.172112       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:05:38.172164       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:05:38.172176       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:05:38.172185       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:05:38.212957       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:05:38.213101       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:05:38.216641       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:38.216782       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:05:38.217089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:05:38.217218       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:05:38.317879       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 05:05:42 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:42.222428     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshhg\" (UniqueName: \"kubernetes.io/projected/ce088b60-6c16-42c0-8eb6-514021eb3e34-kube-api-access-dshhg\") pod \"kubernetes-dashboard-855c9754f9-bbfmr\" (UID: \"ce088b60-6c16-42c0-8eb6-514021eb3e34\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bbfmr"
	Dec 16 05:05:45 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:45.872861     736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 16 05:05:46 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:46.136932     736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bbfmr" podStartSLOduration=0.943861304 podStartE2EDuration="4.136907827s" podCreationTimestamp="2025-12-16 05:05:42 +0000 UTC" firstStartedPulling="2025-12-16 05:05:42.456613329 +0000 UTC m=+7.500407235" lastFinishedPulling="2025-12-16 05:05:45.649659862 +0000 UTC m=+10.693453758" observedRunningTime="2025-12-16 05:05:46.1364839 +0000 UTC m=+11.180277815" watchObservedRunningTime="2025-12-16 05:05:46.136907827 +0000 UTC m=+11.180701741"
	Dec 16 05:05:48 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:48.115633     736 scope.go:117] "RemoveContainer" containerID="d9dc8cf62cd789c1ebe249a4423bc4186f245e326e0e3d374818527474bfb896"
	Dec 16 05:05:49 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:49.122672     736 scope.go:117] "RemoveContainer" containerID="d9dc8cf62cd789c1ebe249a4423bc4186f245e326e0e3d374818527474bfb896"
	Dec 16 05:05:49 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:49.123604     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:05:49 default-k8s-diff-port-375252 kubelet[736]: E1216 05:05:49.123848     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:05:50 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:50.126329     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:05:50 default-k8s-diff-port-375252 kubelet[736]: E1216 05:05:50.126490     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:05:51 default-k8s-diff-port-375252 kubelet[736]: I1216 05:05:51.128788     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:05:51 default-k8s-diff-port-375252 kubelet[736]: E1216 05:05:51.128960     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:06:03 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:03.045279     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:06:03 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:03.160826     736 scope.go:117] "RemoveContainer" containerID="9c997805bf139b00c0bcd373c0c10fe06e6e37f2f76571e760e720b3b4f813ee"
	Dec 16 05:06:03 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:03.161166     736 scope.go:117] "RemoveContainer" containerID="239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	Dec 16 05:06:03 default-k8s-diff-port-375252 kubelet[736]: E1216 05:06:03.161362     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:06:09 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:09.756559     736 scope.go:117] "RemoveContainer" containerID="239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	Dec 16 05:06:09 default-k8s-diff-port-375252 kubelet[736]: E1216 05:06:09.756814     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:06:10 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:10.184988     736 scope.go:117] "RemoveContainer" containerID="d7bcd1550ff4978f686de51b0a89276350f63f70a69051a18651034993bd6ecb"
	Dec 16 05:06:21 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:21.044230     736 scope.go:117] "RemoveContainer" containerID="239478b01093679e6e51ceab487d87dec68991515f1936afdf0346802e8c6bfe"
	Dec 16 05:06:21 default-k8s-diff-port-375252 kubelet[736]: E1216 05:06:21.044425     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gsxcg_kubernetes-dashboard(d12c4edd-7fe0-4267-8bb3-ba84609ec079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gsxcg" podUID="d12c4edd-7fe0-4267-8bb3-ba84609ec079"
	Dec 16 05:06:30 default-k8s-diff-port-375252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:06:30 default-k8s-diff-port-375252 kubelet[736]: I1216 05:06:30.011296     736 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 16 05:06:30 default-k8s-diff-port-375252 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:06:30 default-k8s-diff-port-375252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:06:30 default-k8s-diff-port-375252 systemd[1]: kubelet.service: Consumed 1.799s CPU time.
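
Two things stand out above: the scraper's CrashLoopBackOff back-off doubling from 10s to 20s, and systemd stopping kubelet at 05:06:30, which is consistent with the Pause tests in this run. A sketch for inspecting that stopped state from the host:

    minikube -p default-k8s-diff-port-375252 ssh -- sudo systemctl status kubelet
    minikube -p default-k8s-diff-port-375252 ssh -- \
      sudo journalctl -u kubelet --since "2025-12-16 05:06:29" -o short-iso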
	
	
	==> kubernetes-dashboard [227abf9a6095bd5d2d1170608d9021974081bc4722607f7541d1e12fb67fe96c] <==
	2025/12/16 05:05:45 Using namespace: kubernetes-dashboard
	2025/12/16 05:05:45 Using in-cluster config to connect to apiserver
	2025/12/16 05:05:45 Using secret token for csrf signing
	2025/12/16 05:05:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/16 05:05:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/16 05:05:45 Successful initial request to the apiserver, version: v1.34.2
	2025/12/16 05:05:45 Generating JWE encryption key
	2025/12/16 05:05:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/16 05:05:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/16 05:05:45 Initializing JWE encryption key from synchronized object
	2025/12/16 05:05:45 Creating in-cluster Sidecar client
	2025/12/16 05:05:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/16 05:05:45 Serving insecurely on HTTP port: 9090
	2025/12/16 05:05:45 Starting overwatch
	2025/12/16 05:06:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [aafdcce5933f6c71bca019afecc735dcf059240e01b19f74e61e13d2997537e1] <==
	I1216 05:06:10.626549       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 05:06:10.636241       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 05:06:10.636295       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1216 05:06:10.638658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:14.094929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:18.355249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:21.953605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:25.007858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:28.031065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:28.034963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:06:28.035153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 05:06:28.035296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d1dfb198-3305-4c9d-9019-8ec75faa390c", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-375252_92472f9f-936d-41d5-831b-069edeb14337 became leader
	I1216 05:06:28.035338       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-375252_92472f9f-936d-41d5-831b-069edeb14337!
	W1216 05:06:28.037966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:28.041258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1216 05:06:28.135643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-375252_92472f9f-936d-41d5-831b-069edeb14337!
	W1216 05:06:30.046147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:30.051107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:32.055436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:32.062716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:34.065555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 05:06:34.069295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d7bcd1550ff4978f686de51b0a89276350f63f70a69051a18651034993bd6ecb] <==
	I1216 05:05:39.444621       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 05:06:09.446715       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
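The dump above captures two distinct failures: dashboard-metrics-scraper is in CrashLoopBackOff, and the first storage-provisioner container exited because it could not reach the in-cluster apiserver service IP (dial tcp 10.96.0.1:443: i/o timeout). A minimal way to dig into both by hand, assuming the cluster is still up (context, namespace, pod, and Endpoints names are taken from the logs above):

	# why dashboard-metrics-scraper keeps restarting: previous container logs plus events
	kubectl --context default-k8s-diff-port-375252 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-gsxcg --previous
	kubectl --context default-k8s-diff-port-375252 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-gsxcg
	# the storage-provisioner's leader-election lease is the Endpoints object named in its log
	kubectl --context default-k8s-diff-port-375252 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the repeated client-go warnings recommend EndpointSlice as the replacement resource
	kubectl --context default-k8s-diff-port-375252 -n kube-system get endpointslices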
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252: exit status 2 (409.902768ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
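helpers_test reads a single field here through a Go template; the same syntax can pull several status fields at once when checking by hand (a sketch, not part of the harness; field names as printed by minikube status):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-375252 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'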
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-375252 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.68s)
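To reproduce the failing step outside the test harness, the same subcommand can be run directly against the profile while it still exists (a sketch mirroring the harness invocation style):

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-375252 --alsologtostderr -v=1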

                                                
                                    

Test pass (351/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 2.99
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.2
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.82
31 TestOffline 54.2
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 123.53
40 TestAddons/serial/GCPAuth/Namespaces 0.16
41 TestAddons/serial/GCPAuth/FakeCredentials 8.43
57 TestAddons/StoppedEnableDisable 16.78
58 TestCertOptions 26.79
59 TestCertExpiration 207.92
61 TestForceSystemdFlag 22.85
62 TestForceSystemdEnv 42.71
67 TestErrorSpam/setup 22.86
68 TestErrorSpam/start 0.65
69 TestErrorSpam/status 0.92
70 TestErrorSpam/pause 6.19
71 TestErrorSpam/unpause 4.83
72 TestErrorSpam/stop 8.11
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 66.43
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.99
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
84 TestFunctional/serial/CacheCmd/cache/add_local 0.88
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 45.55
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.17
95 TestFunctional/serial/LogsFileCmd 1.18
96 TestFunctional/serial/InvalidService 3.94
98 TestFunctional/parallel/ConfigCmd 0.41
99 TestFunctional/parallel/DashboardCmd 4.73
100 TestFunctional/parallel/DryRun 0.44
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 1.11
106 TestFunctional/parallel/ServiceCmdConnect 6.72
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 27.07
110 TestFunctional/parallel/SSHCmd 0.65
111 TestFunctional/parallel/CpCmd 1.67
112 TestFunctional/parallel/MySQL 45.41
113 TestFunctional/parallel/FileSync 0.28
114 TestFunctional/parallel/CertSync 1.67
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
122 TestFunctional/parallel/License 0.25
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.16
124 TestFunctional/parallel/Version/short 0.08
125 TestFunctional/parallel/Version/components 0.53
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
130 TestFunctional/parallel/ImageCommands/ImageBuild 2.87
131 TestFunctional/parallel/ImageCommands/Setup 0.39
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.94
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.33
141 TestFunctional/parallel/ServiceCmd/List 0.39
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.8
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
145 TestFunctional/parallel/ServiceCmd/Format 0.56
146 TestFunctional/parallel/ServiceCmd/URL 0.42
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
148 TestFunctional/parallel/ProfileCmd/profile_list 0.47
149 TestFunctional/parallel/MountCmd/any-port 6.93
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
152 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
153 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
155 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 6.2
156 TestFunctional/parallel/MountCmd/specific-port 1.87
157 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
158 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
162 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 66.51
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 14.65
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.45
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.83
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.51
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 66.52
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.17
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.2
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.37
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 6.55
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.4
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.18
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.96
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 8.53
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 18.43
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.6
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.9
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 22.06
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.74
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.64
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.28
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.09
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.69
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 5.93
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.18
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.25
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.16
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 9.22
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.84
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.33
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.49
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.58
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.49
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.49
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.34
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.35
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.34
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.43
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.43
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.92
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.19
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.19
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.96
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.06
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 139.12
266 TestMultiControlPlane/serial/DeployApp 4.46
267 TestMultiControlPlane/serial/PingHostFromPods 1.08
268 TestMultiControlPlane/serial/AddWorkerNode 56.43
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
271 TestMultiControlPlane/serial/CopyFile 16.92
272 TestMultiControlPlane/serial/StopSecondaryNode 13.78
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.45
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 102.38
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.47
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
279 TestMultiControlPlane/serial/StopCluster 48.49
280 TestMultiControlPlane/serial/RestartCluster 54.31
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
282 TestMultiControlPlane/serial/AddSecondaryNode 39.67
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
288 TestJSONOutput/start/Command 39.74
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.18
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.24
313 TestKicCustomNetwork/create_custom_network 25.33
314 TestKicCustomNetwork/use_default_bridge_network 21.47
315 TestKicExistingNetwork 25.53
316 TestKicCustomSubnet 26
317 TestKicStaticIP 25.48
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 47.54
322 TestMountStart/serial/StartWithMountFirst 4.7
323 TestMountStart/serial/VerifyMountFirst 0.27
324 TestMountStart/serial/StartWithMountSecond 4.65
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.69
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.2
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 63.57
334 TestMultiNode/serial/DeployApp2Nodes 3.38
335 TestMultiNode/serial/PingHostFrom2Pods 0.74
336 TestMultiNode/serial/AddNode 22.87
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.64
339 TestMultiNode/serial/CopyFile 9.74
340 TestMultiNode/serial/StopNode 2.21
341 TestMultiNode/serial/StartAfterStop 7.12
342 TestMultiNode/serial/RestartKeepsNodes 77.6
343 TestMultiNode/serial/DeleteNode 5.19
344 TestMultiNode/serial/StopMultiNode 28.47
345 TestMultiNode/serial/RestartMultiNode 48.99
346 TestMultiNode/serial/ValidateNameConflict 24.87
351 TestPreload 81.17
353 TestScheduledStopUnix 97.87
356 TestInsufficientStorage 11.61
357 TestRunningBinaryUpgrade 45.32
359 TestKubernetesUpgrade 293.21
360 TestMissingContainerUpgrade 69.48
362 TestStoppedBinaryUpgrade/Setup 0.71
363 TestPause/serial/Start 88.88
364 TestStoppedBinaryUpgrade/Upgrade 303.32
365 TestPause/serial/SecondStartNoReconfiguration 6.83
375 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
376 TestNoKubernetes/serial/StartWithK8s 21.03
377 TestNoKubernetes/serial/StartWithStopK8s 15.92
378 TestNoKubernetes/serial/Start 6.75
379 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
380 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
381 TestNoKubernetes/serial/ProfileList 1.69
382 TestNoKubernetes/serial/Stop 1.28
383 TestNoKubernetes/serial/StartNoArgs 6.36
384 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
392 TestNetworkPlugins/group/false 3.43
397 TestStartStop/group/old-k8s-version/serial/FirstStart 50.39
398 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
400 TestStartStop/group/no-preload/serial/FirstStart 45.19
401 TestStartStop/group/old-k8s-version/serial/DeployApp 9.27
403 TestStartStop/group/embed-certs/serial/FirstStart 40.02
405 TestStartStop/group/old-k8s-version/serial/Stop 16.12
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
407 TestStartStop/group/old-k8s-version/serial/SecondStart 51.51
409 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.72
410 TestStartStop/group/no-preload/serial/DeployApp 9.32
411 TestStartStop/group/embed-certs/serial/DeployApp 8.24
413 TestStartStop/group/no-preload/serial/Stop 18.77
415 TestStartStop/group/embed-certs/serial/Stop 18.15
416 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
417 TestStartStop/group/no-preload/serial/SecondStart 46.31
418 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
420 TestStartStop/group/embed-certs/serial/SecondStart 45.43
421 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
423 TestStartStop/group/default-k8s-diff-port/serial/Stop 20.31
424 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
425 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
428 TestStartStop/group/newest-cni/serial/FirstStart 25.19
429 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
430 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.64
431 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
432 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
433 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
434 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
436 TestStartStop/group/newest-cni/serial/DeployApp 0
438 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
439 TestStartStop/group/newest-cni/serial/Stop 2.45
440 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
441 TestStartStop/group/newest-cni/serial/SecondStart 12.42
442 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
444 TestNetworkPlugins/group/auto/Start 39.98
445 TestNetworkPlugins/group/kindnet/Start 43.24
446 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
447 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
448 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
450 TestNetworkPlugins/group/calico/Start 52.41
451 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
452 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
453 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
455 TestNetworkPlugins/group/auto/KubeletFlags 0.33
456 TestNetworkPlugins/group/auto/NetCatPod 9.27
457 TestNetworkPlugins/group/custom-flannel/Start 53.81
458 TestNetworkPlugins/group/auto/DNS 0.13
459 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
460 TestNetworkPlugins/group/auto/Localhost 0.1
461 TestNetworkPlugins/group/auto/HairPin 0.09
462 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
463 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
464 TestNetworkPlugins/group/kindnet/DNS 0.14
465 TestNetworkPlugins/group/kindnet/Localhost 0.15
466 TestNetworkPlugins/group/kindnet/HairPin 0.11
467 TestNetworkPlugins/group/calico/ControllerPod 6.01
468 TestNetworkPlugins/group/enable-default-cni/Start 62.57
469 TestNetworkPlugins/group/calico/KubeletFlags 0.37
470 TestNetworkPlugins/group/calico/NetCatPod 9.24
471 TestNetworkPlugins/group/calico/DNS 0.14
472 TestNetworkPlugins/group/calico/Localhost 0.1
473 TestNetworkPlugins/group/calico/HairPin 0.1
474 TestNetworkPlugins/group/flannel/Start 45.18
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
477 TestNetworkPlugins/group/custom-flannel/DNS 0.11
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
480 TestNetworkPlugins/group/bridge/Start 62.42
481 TestNetworkPlugins/group/flannel/ControllerPod 6
482 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
483 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
484 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
485 TestNetworkPlugins/group/flannel/NetCatPod 8.17
486 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
487 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
488 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
489 TestNetworkPlugins/group/flannel/DNS 0.1
490 TestNetworkPlugins/group/flannel/Localhost 0.08
491 TestNetworkPlugins/group/flannel/HairPin 0.08
492 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
493 TestNetworkPlugins/group/bridge/NetCatPod 8.16
494 TestNetworkPlugins/group/bridge/DNS 0.11
495 TestNetworkPlugins/group/bridge/Localhost 0.08
496 TestNetworkPlugins/group/bridge/HairPin 0.08
x
+
TestDownloadOnly/v1.28.0/json-events (5.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-064794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-064794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.423569716s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.42s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1216 04:25:14.926053    8592 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1216 04:25:14.926150    8592 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
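preload-exists only checks for the tarball on disk, so the cache can be inspected (or removed to force a re-download) with ordinary file commands; the path below is the one reported by preload.go above:

	ls -lh /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/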

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-064794
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-064794: exit status 85 (70.484567ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-064794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-064794 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:25:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:25:09.556288    8604 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:25:09.556383    8604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:09.556393    8604 out.go:374] Setting ErrFile to fd 2...
	I1216 04:25:09.556397    8604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:09.556606    8604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	W1216 04:25:09.556736    8604 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22141-5066/.minikube/config/config.json: open /home/jenkins/minikube-integration/22141-5066/.minikube/config/config.json: no such file or directory
	I1216 04:25:09.557269    8604 out.go:368] Setting JSON to true
	I1216 04:25:09.558165    8604 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":461,"bootTime":1765858649,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:25:09.558225    8604 start.go:143] virtualization: kvm guest
	I1216 04:25:09.562182    8604 out.go:99] [download-only-064794] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1216 04:25:09.562321    8604 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 04:25:09.562377    8604 notify.go:221] Checking for updates...
	I1216 04:25:09.563742    8604 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:25:09.565157    8604 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:25:09.566448    8604 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:25:09.567629    8604 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:25:09.568767    8604 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 04:25:09.570902    8604 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:25:09.571168    8604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:25:09.596431    8604 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:25:09.596500    8604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:09.817320    8604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-16 04:25:09.808080844 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:09.817428    8604 docker.go:319] overlay module found
	I1216 04:25:09.818879    8604 out.go:99] Using the docker driver based on user configuration
	I1216 04:25:09.818906    8604 start.go:309] selected driver: docker
	I1216 04:25:09.818912    8604 start.go:927] validating driver "docker" against <nil>
	I1216 04:25:09.818987    8604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:09.876783    8604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-16 04:25:09.868535737 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:09.876953    8604 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:25:09.877533    8604 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1216 04:25:09.877683    8604 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:25:09.879554    8604 out.go:171] Using Docker driver with root privileges
	I1216 04:25:09.880769    8604 cni.go:84] Creating CNI manager for ""
	I1216 04:25:09.880827    8604 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:25:09.880837    8604 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:25:09.880889    8604 start.go:353] cluster config:
	{Name:download-only-064794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-064794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:25:09.882250    8604 out.go:99] Starting "download-only-064794" primary control-plane node in "download-only-064794" cluster
	I1216 04:25:09.882283    8604 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:25:09.883529    8604 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:25:09.883567    8604 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:25:09.883681    8604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:25:09.898301    8604 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:09.898337    8604 cache.go:65] Caching tarball of preloaded images
	I1216 04:25:09.898502    8604 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:25:09.899724    8604 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 04:25:09.899919    8604 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 04:25:09.899943    8604 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1216 04:25:09.899967    8604 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1216 04:25:09.900029    8604 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 04:25:09.925147    8604 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1216 04:25:09.925293    8604 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:12.614686    8604 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1216 04:25:12.615191    8604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/download-only-064794/config.json ...
	I1216 04:25:12.615233    8604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/download-only-064794/config.json: {Name:mke11935d8a3d661a1db97c7f56409c34b8ca8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:12.615421    8604 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:25:12.615590    8604 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22141-5066/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-064794 host does not exist
	  To start a cluster, run: "minikube start -p download-only-064794"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
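The Last Start log shows the preload tarball being fetched from GCS with an md5 checksum obtained from the GCS API. The same artifact can be fetched and verified by hand (URL and checksum are verbatim from the log above; the md5sum invocation is a sketch):

	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	echo "72bc7f8573f574c02d8c9a9b3496176b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -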

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-064794
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (2.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-159842 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-159842 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.988338067s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (2.99s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1216 04:25:18.348364    8592 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 04:25:18.348394    8592 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-159842
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-159842: exit status 85 (74.461099ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-064794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-064794 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-064794                                                                                                                                                   │ download-only-064794 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ -o=json --download-only -p download-only-159842 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-159842 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:25:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:25:15.410369    8972 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:25:15.410476    8972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:15.410485    8972 out.go:374] Setting ErrFile to fd 2...
	I1216 04:25:15.410489    8972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:15.410695    8972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:25:15.411169    8972 out.go:368] Setting JSON to true
	I1216 04:25:15.411937    8972 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":466,"bootTime":1765858649,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:25:15.412000    8972 start.go:143] virtualization: kvm guest
	I1216 04:25:15.413694    8972 out.go:99] [download-only-159842] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:25:15.413854    8972 notify.go:221] Checking for updates...
	I1216 04:25:15.415944    8972 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:25:15.417246    8972 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:25:15.418350    8972 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:25:15.419358    8972 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:25:15.420489    8972 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 04:25:15.422445    8972 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:25:15.422648    8972 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:25:15.444789    8972 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:25:15.444861    8972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:15.495790    8972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-16 04:25:15.486784573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:15.495895    8972 docker.go:319] overlay module found
	I1216 04:25:15.497535    8972 out.go:99] Using the docker driver based on user configuration
	I1216 04:25:15.497561    8972 start.go:309] selected driver: docker
	I1216 04:25:15.497567    8972 start.go:927] validating driver "docker" against <nil>
	I1216 04:25:15.497643    8972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:15.551602    8972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-16 04:25:15.541482163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:15.551853    8972 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:25:15.552590    8972 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1216 04:25:15.552808    8972 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:25:15.554586    8972 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-159842 host does not exist
	  To start a cluster, run: "minikube start -p download-only-159842"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

TestDownloadOnly/v1.34.2/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-159842
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.2s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-317050 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-317050 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.201794629s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.20s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1216 04:25:21.998472    8592 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1216 04:25:21.998500    8592 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)
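For context, the preload check above roughly reduces to a file-existence test on the cached tarball; a hand-run equivalent, using the path printed in the log, would be:

    ls -lh /home/jenkins/minikube-integration/22141-5066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
    # success here corresponds to the "Found local preload" line above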
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-317050
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-317050: exit status 85 (72.312512ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-064794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-064794 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-064794                                                                                                                                                          │ download-only-064794 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ -o=json --download-only -p download-only-159842 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-159842 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-159842                                                                                                                                                          │ download-only-159842 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ -o=json --download-only -p download-only-317050 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-317050 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:25:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:25:18.847542    9315 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:25:18.847766    9315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:18.847774    9315 out.go:374] Setting ErrFile to fd 2...
	I1216 04:25:18.847778    9315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:18.847988    9315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:25:18.848445    9315 out.go:368] Setting JSON to true
	I1216 04:25:18.849213    9315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":470,"bootTime":1765858649,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:25:18.849271    9315 start.go:143] virtualization: kvm guest
	I1216 04:25:18.851146    9315 out.go:99] [download-only-317050] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:25:18.851262    9315 notify.go:221] Checking for updates...
	I1216 04:25:18.852437    9315 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:25:18.853535    9315 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:25:18.854753    9315 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:25:18.856203    9315 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:25:18.857311    9315 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 04:25:18.859456    9315 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:25:18.859652    9315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:25:18.880779    9315 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:25:18.880889    9315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:18.937686    9315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 04:25:18.927606213 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:18.937829    9315 docker.go:319] overlay module found
	I1216 04:25:18.939402    9315 out.go:99] Using the docker driver based on user configuration
	I1216 04:25:18.939435    9315 start.go:309] selected driver: docker
	I1216 04:25:18.939444    9315 start.go:927] validating driver "docker" against <nil>
	I1216 04:25:18.939535    9315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:25:18.996175    9315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-16 04:25:18.986990905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:25:18.996339    9315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:25:18.996855    9315 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1216 04:25:18.997014    9315 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:25:18.998871    9315 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-317050 host does not exist
	  To start a cluster, run: "minikube start -p download-only-317050"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-317050
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.4s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-190309 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-190309" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-190309
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.82s)
=== RUN   TestBinaryMirror
I1216 04:25:23.266789    8592 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-231674 --alsologtostderr --binary-mirror http://127.0.0.1:36059 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-231674" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-231674
--- PASS: TestBinaryMirror (0.82s)

TestOffline (54.2s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-464497 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-464497 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (51.741737025s)
helpers_test.go:176: Cleaning up "offline-crio-464497" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-464497
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-464497: (2.455634243s)
--- PASS: TestOffline (54.20s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-920118
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-920118: exit status 85 (62.425273ms)

-- stdout --
	* Profile "addons-920118" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-920118"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-920118
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-920118: exit status 85 (62.195428ms)

-- stdout --
	* Profile "addons-920118" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-920118"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (123.53s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-920118 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-920118 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.531576596s)
--- PASS: TestAddons/Setup (123.53s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-920118 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-920118 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (8.43s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-920118 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-920118 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bc7a543f-c0f3-4c70-8051-a580aed61e46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [bc7a543f-c0f3-4c70-8051-a580aed61e46] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003187757s
addons_test.go:696: (dbg) Run:  kubectl --context addons-920118 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-920118 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-920118 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.43s)

TestAddons/StoppedEnableDisable (16.78s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-920118
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-920118: (16.503389293s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-920118
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-920118
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-920118
--- PASS: TestAddons/StoppedEnableDisable (16.78s)

TestCertOptions (26.79s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-550146 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-550146 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.739846861s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-550146 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-550146 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-550146 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-550146" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-550146
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-550146: (2.404074255s)
--- PASS: TestCertOptions (26.79s)

TestCertExpiration (207.92s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-123545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-123545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (18.960447602s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-123545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-123545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.789438814s)
helpers_test.go:176: Cleaning up "cert-expiration-123545" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-123545
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-123545: (3.1646798s)
--- PASS: TestCertExpiration (207.92s)

TestForceSystemdFlag (22.85s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-507334 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-507334 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.17223561s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-507334 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-507334" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-507334
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-507334: (2.395298865s)
--- PASS: TestForceSystemdFlag (22.85s)

TestForceSystemdEnv (42.71s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-515540 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-515540 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.015053351s)
helpers_test.go:176: Cleaning up "force-systemd-env-515540" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-515540
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-515540: (2.688525906s)
--- PASS: TestForceSystemdEnv (42.71s)

TestErrorSpam/setup (22.86s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-410054 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-410054 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-410054 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-410054 --driver=docker  --container-runtime=crio: (22.863913291s)
--- PASS: TestErrorSpam/setup (22.86s)

TestErrorSpam/start (0.65s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.92s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (6.19s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause: exit status 80 (2.277955403s)

-- stdout --
	* Pausing node nospam-410054 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:30:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause: exit status 80 (2.0472238s)

-- stdout --
	* Pausing node nospam-410054 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:30:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause: exit status 80 (1.863209948s)

-- stdout --
	* Pausing node nospam-410054 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.19s)
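All three pause attempts above die on the same underlying call; a minimal by-hand reproduction, assuming the nospam-410054 node is still up, is to run the command from the stderr block over minikube ssh:

    out/minikube-linux-amd64 -p nospam-410054 ssh "sudo runc list -f json"
    # expected to fail with "open /run/runc: no such file or directory" while the
    # node has no runc state directory; minikube surfaces that failure as
    # GUEST_PAUSE and exit status 80, as seen in the stderr blocks above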
TestErrorSpam/unpause (4.83s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause: exit status 80 (1.647865804s)

-- stdout --
	* Unpausing node nospam-410054 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:31:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause: exit status 80 (1.498148891s)

-- stdout --
	* Unpausing node nospam-410054 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:31:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause: exit status 80 (1.684840429s)

-- stdout --
	* Unpausing node nospam-410054 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:31:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.83s)

TestErrorSpam/stop (8.11s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 stop: (7.906611328s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-410054 --log_dir /tmp/nospam-410054 stop
--- PASS: TestErrorSpam/stop (8.11s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/test/nested/copy/8592/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.43s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-423958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-423958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m6.42532119s)
--- PASS: TestFunctional/serial/StartWithProxy (66.43s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.99s)
=== RUN   TestFunctional/serial/SoftStart
I1216 04:32:23.692670    8592 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-423958 --alsologtostderr -v=8
E1216 04:32:28.699958    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:32:28.706335    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:32:28.717720    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:32:28.739194    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:32:28.780583    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:32:28.861968    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:32:29.023492    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:32:29.346471    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-423958 --alsologtostderr -v=8: (5.987546184s)
functional_test.go:678: soft start took 5.98820197s for "functional-423958" cluster.
I1216 04:32:29.680523    8592 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (5.99s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-423958 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cache add registry.k8s.io/pause:3.1
E1216 04:32:29.988829    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cache add registry.k8s.io/pause:3.3
E1216 04:32:31.270880    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

TestFunctional/serial/CacheCmd/cache/add_local (0.88s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-423958 /tmp/TestFunctionalserialCacheCmdcacheadd_local3663692084/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cache add minikube-local-cache-test:functional-423958
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cache delete minikube-local-cache-test:functional-423958
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-423958
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.88s)
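Condensed, the local-cache round trip above is the following sequence (the build context is a per-run temp directory, shown here as a hypothetical placeholder):

    docker build -t minikube-local-cache-test:functional-423958 <tmp-build-dir>    # <tmp-build-dir> stands in for the per-run path
    out/minikube-linux-amd64 -p functional-423958 cache add minikube-local-cache-test:functional-423958
    out/minikube-linux-amd64 -p functional-423958 cache delete minikube-local-cache-test:functional-423958
    docker rmi minikube-local-cache-test:functional-423958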
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh sudo crictl rmi registry.k8s.io/pause:latest
E1216 04:32:33.832286    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (277.042956ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
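The reload cycle above (delete the image inside the node, confirm `crictl inspecti` now fails, run `cache reload`, confirm the image is back) can be reproduced outside the harness. A minimal Go sketch of that loop, reusing the binary path and profile name from this run; the `run` helper is illustrative, not part of the test code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and reports success.
func run(name string, args ...string) bool {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err == nil
}

func main() {
	mk := "out/minikube-linux-amd64" // binary under test in this report
	p := "functional-423958"         // profile from this run

	// Remove the cached image inside the node; inspecti must then fail.
	run(mk, "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
		fmt.Println("image unexpectedly still present")
	}
	// Reload the local cache into the node; inspecti must succeed again.
	run(mk, "-p", p, "cache", "reload")
	if !run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
		fmt.Println("image missing after cache reload")
	}
}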

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 kubectl -- --context functional-423958 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-423958 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (45.55s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-423958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 04:32:38.954379    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:32:49.195758    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:09.677547    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-423958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.550795148s)
functional_test.go:776: restart took 45.550990789s for "functional-423958" cluster.
I1216 04:33:21.017757    8592 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (45.55s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-423958 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
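The health check above fetches the control-plane pods as JSON and asserts each is Running and Ready. A rough standalone equivalent of that probe, decoding only the fields it needs (the `podList` struct is a hand-written subset of the PodList schema, not a type from the test):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just enough of `kubectl get po -o json` for this check.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-423958",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}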

TestFunctional/serial/LogsCmd (1.17s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-423958 logs: (1.174618403s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.18s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 logs --file /tmp/TestFunctionalserialLogsFileCmd1240453701/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-423958 logs --file /tmp/TestFunctionalserialLogsFileCmd1240453701/001/logs.txt: (1.184054454s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.18s)

TestFunctional/serial/InvalidService (3.94s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-423958 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-423958
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-423958: exit status 115 (334.601718ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31781 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-423958 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)
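Exit status 115 is minikube's code for SVC_UNREACHABLE, so the test treats that exit as the expected outcome rather than a failure. A small sketch of asserting a specific exit code with os/exec (a standalone illustration, not the harness's own check):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"service", "invalid-svc", "-p", "functional-423958")
	err := cmd.Run()

	// A service with no running pods should yield exit code 115.
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit (115)")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}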

TestFunctional/parallel/ConfigCmd (0.41s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 config get cpus: exit status 14 (79.179763ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 config get cpus: exit status 14 (66.401651ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
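`config get` on an unset key exits with status 14, which is what the set/unset round-trip above keys on. One way to wrap that convention so a missing key is an ordinary result instead of an error (the `getConfig` helper is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// getConfig returns a minikube config value; ok is false when the key is
// unset, which minikube signals with exit status 14 (seen in the log above).
func getConfig(key string) (value string, ok bool, err error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-423958", "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return "", false, nil
	}
	if err != nil {
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	v, ok, err := getConfig("cpus")
	switch {
	case err != nil:
		panic(err)
	case ok:
		fmt.Println("cpus =", v)
	default:
		fmt.Println("cpus is unset")
	}
}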

TestFunctional/parallel/DashboardCmd (4.73s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-423958 --alsologtostderr -v=1]
E1216 04:33:50.639047    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-423958 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 48373: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.73s)

TestFunctional/parallel/DryRun (0.44s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-423958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-423958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (191.386285ms)

-- stdout --
	* [functional-423958] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1216 04:33:46.899415   46672 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:33:46.899530   46672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:33:46.899539   46672 out.go:374] Setting ErrFile to fd 2...
	I1216 04:33:46.899543   46672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:33:46.899746   46672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:33:46.900227   46672 out.go:368] Setting JSON to false
	I1216 04:33:46.901119   46672 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":978,"bootTime":1765858649,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:33:46.901205   46672 start.go:143] virtualization: kvm guest
	I1216 04:33:46.903171   46672 out.go:179] * [functional-423958] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:33:46.904646   46672 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:33:46.904647   46672 notify.go:221] Checking for updates...
	I1216 04:33:46.908907   46672 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:33:46.910458   46672 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:33:46.911689   46672 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:33:46.912909   46672 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:33:46.914002   46672 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:33:46.915711   46672 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:33:46.916456   46672 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:33:46.943324   46672 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:33:46.943476   46672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:33:47.015648   46672 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-16 04:33:47.004484114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:33:47.015761   46672 docker.go:319] overlay module found
	I1216 04:33:47.017813   46672 out.go:179] * Using the docker driver based on existing profile
	I1216 04:33:47.018874   46672 start.go:309] selected driver: docker
	I1216 04:33:47.018889   46672 start.go:927] validating driver "docker" against &{Name:functional-423958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-423958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:33:47.018994   46672 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:33:47.020711   46672 out.go:203] 
	W1216 04:33:47.021826   46672 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 04:33:47.022919   46672 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-423958 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-423958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-423958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (165.859688ms)

-- stdout --
	* [functional-423958] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1216 04:33:47.333325   47118 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:33:47.333593   47118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:33:47.333604   47118 out.go:374] Setting ErrFile to fd 2...
	I1216 04:33:47.333609   47118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:33:47.333910   47118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:33:47.334360   47118 out.go:368] Setting JSON to false
	I1216 04:33:47.335313   47118 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":978,"bootTime":1765858649,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:33:47.335369   47118 start.go:143] virtualization: kvm guest
	I1216 04:33:47.337289   47118 out.go:179] * [functional-423958] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1216 04:33:47.338641   47118 notify.go:221] Checking for updates...
	I1216 04:33:47.338680   47118 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:33:47.340053   47118 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:33:47.341458   47118 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:33:47.342892   47118 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:33:47.344275   47118 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:33:47.345642   47118 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:33:47.347338   47118 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:33:47.347958   47118 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:33:47.372275   47118 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:33:47.372369   47118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:33:47.429951   47118 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-16 04:33:47.420779082 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:33:47.430077   47118 docker.go:319] overlay module found
	I1216 04:33:47.431670   47118 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 04:33:47.432769   47118 start.go:309] selected driver: docker
	I1216 04:33:47.432781   47118 start.go:927] validating driver "docker" against &{Name:functional-423958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-423958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:33:47.432855   47118 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:33:47.434551   47118 out.go:203] 
	W1216 04:33:47.435758   47118 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 04:33:47.436950   47118 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.11s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

TestFunctional/parallel/ServiceCmdConnect (6.72s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-423958 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-423958 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-vj8np" [c0a9b054-c97c-4ce1-bfb4-4635b84af6ac] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-vj8np" [c0a9b054-c97c-4ce1-bfb4-4635b84af6ac] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004049113s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31742
functional_test.go:1680: http://192.168.49.2:31742: success! body:
Request served by hello-node-connect-7d85dfc575-vj8np

HTTP/1.1 GET /

Host: 192.168.49.2:31742
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.72s)
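The connectivity check reduces to: ask minikube for the service's NodePort URL, issue an HTTP GET, and expect the echo-server to name the serving pod in its reply. A compact sketch of those two steps (a standalone illustration, not the test's implementation):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL, exactly as the test does above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-423958",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// The echo-server replies with "Request served by <pod-name>".
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}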

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (27.07s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [0a816092-2837-404f-a46b-cd69e0f5e8e3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003978823s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-423958 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-423958 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-423958 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-423958 apply -f testdata/storage-provisioner/pod.yaml
I1216 04:33:33.883833    8592 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7dafc15c-51ce-44e5-9eaa-44c06d6d23a2] Pending
helpers_test.go:353: "sp-pod" [7dafc15c-51ce-44e5-9eaa-44c06d6d23a2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [7dafc15c-51ce-44e5-9eaa-44c06d6d23a2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004083274s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-423958 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-423958 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-423958 delete -f testdata/storage-provisioner/pod.yaml: (1.173985087s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-423958 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c4466a0b-cb84-4532-9893-14cdb7d236de] Pending
helpers_test.go:353: "sp-pod" [c4466a0b-cb84-4532-9893-14cdb7d236de] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.052862203s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-423958 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.07s)
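The persistence assertion is the interesting part of this test: a file written to the PVC-backed mount must survive deleting and recreating the pod, because the claim and its volume outlive any one pod. A condensed sketch of that sequence; unlike the real test it does not wait for the replacement pod to reach Running, so the final exec may need a retry:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-423958 context and echoes it.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-423958"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", full, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	// Write a marker file onto the PVC-backed mount.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Recreate the pod; the PVC (and its data) stays behind.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// Once the new pod is Running, the marker must still be there.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}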

TestFunctional/parallel/SSHCmd (0.65s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

TestFunctional/parallel/CpCmd (1.67s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh -n functional-423958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cp functional-423958:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1123307184/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh -n functional-423958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh -n functional-423958 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.67s)

TestFunctional/parallel/MySQL (45.41s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-423958 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-24tn5" [07628364-3679-4f33-95ff-e362d011ada8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-24tn5" [07628364-3679-4f33-95ff-e362d011ada8] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 40.003597348s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-423958 exec mysql-6bcdcbc558-24tn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-423958 exec mysql-6bcdcbc558-24tn5 -- mysql -ppassword -e "show databases;": exit status 1 (112.27345ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1216 04:34:11.358134    8592 retry.go:31] will retry after 1.431862381s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-423958 exec mysql-6bcdcbc558-24tn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-423958 exec mysql-6bcdcbc558-24tn5 -- mysql -ppassword -e "show databases;": exit status 1 (87.679356ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 04:34:12.878562    8592 retry.go:31] will retry after 1.122205707s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-423958 exec mysql-6bcdcbc558-24tn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-423958 exec mysql-6bcdcbc558-24tn5 -- mysql -ppassword -e "show databases;": exit status 1 (87.327541ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 04:34:14.089288    8592 retry.go:31] will retry after 2.304665614s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-423958 exec mysql-6bcdcbc558-24tn5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (45.41s)
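The two failure modes above are both transient MySQL startup states: ERROR 1045 while root credentials are still being initialized, then ERROR 2002 while the server socket is not yet listening. That is why the harness retries the probe with a growing delay. A minimal loop in the same spirit (the delay schedule here is illustrative; the harness computes its own):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delays := []time.Duration{time.Second, 2 * time.Second, 4 * time.Second}
	for i := 0; ; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-423958",
			"exec", "mysql-6bcdcbc558-24tn5", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out) // the query finally succeeded
			return
		}
		if i >= len(delays) {
			panic(err) // out of retries
		}
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", i+1, err, delays[i])
		time.Sleep(delays[i])
	}
}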

TestFunctional/parallel/FileSync (0.28s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8592/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo cat /etc/test/nested/copy/8592/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.67s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8592.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo cat /etc/ssl/certs/8592.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8592.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo cat /usr/share/ca-certificates/8592.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/85922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo cat /etc/ssl/certs/85922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/85922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo cat /usr/share/ca-certificates/85922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-423958 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 ssh "sudo systemctl is-active docker": exit status 1 (293.587928ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 ssh "sudo systemctl is-active containerd": exit status 1 (280.414282ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
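`systemctl is-active` exits 0 only when the unit is active; the `inactive` output plus the remote exit status 3 seen above is its normal answer for a unit that is simply not running, which `minikube ssh` surfaces as a non-zero exit. A sketch that folds that convention into a boolean (the `isActive` helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// isActive asks systemd inside the node whether a unit is running. Any
// non-zero exit (systemctl's "inactive" answer included) maps to false.
func isActive(unit string) bool {
	return exec.Command("out/minikube-linux-amd64", "-p", "functional-423958",
		"ssh", "sudo systemctl is-active "+unit).Run() == nil
}

func main() {
	// On a crio cluster, only crio should report active.
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s active: %v\n", unit, isActive(unit))
	}
}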

TestFunctional/parallel/License (0.25s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-423958 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-423958 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-h75rl" [69bde553-6d42-4a2f-9bd6-4d5a114dbfa1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-h75rl" [69bde553-6d42-4a2f-9bd6-4d5a114dbfa1] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004001934s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.16s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 version --short
2025/12/16 04:33:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.53s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-423958 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-423958
localhost/kicbase/echo-server:functional-423958
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-423958 image ls --format short --alsologtostderr:
I1216 04:33:54.686846   48976 out.go:360] Setting OutFile to fd 1 ...
I1216 04:33:54.687188   48976 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:54.687202   48976 out.go:374] Setting ErrFile to fd 2...
I1216 04:33:54.687209   48976 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:54.687476   48976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:33:54.688235   48976 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:54.688369   48976 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:54.688994   48976 cli_runner.go:164] Run: docker container inspect functional-423958 --format={{.State.Status}}
I1216 04:33:54.710787   48976 ssh_runner.go:195] Run: systemctl --version
I1216 04:33:54.710863   48976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423958
I1216 04:33:54.732602   48976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-423958/id_rsa Username:docker}
I1216 04:33:54.829821   48976 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
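
For reference, the stderr trace shows how "image ls" is implemented on a crio cluster: the CLI opens an SSH session into the node and reads the CRI image store. A minimal manual equivalent, assuming the same profile name as this run, is:

    out/minikube-linux-amd64 -p functional-423958 ssh -- sudo crictl images --output json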

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-423958 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-423958  │ 8f82e78717314 │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-423958  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-423958 image ls --format table --alsologtostderr:
I1216 04:33:55.060946   49326 out.go:360] Setting OutFile to fd 1 ...
I1216 04:33:55.061264   49326 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:55.061276   49326 out.go:374] Setting ErrFile to fd 2...
I1216 04:33:55.061280   49326 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:55.061447   49326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:33:55.062077   49326 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:55.062210   49326 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:55.062685   49326 cli_runner.go:164] Run: docker container inspect functional-423958 --format={{.State.Status}}
I1216 04:33:55.082975   49326 ssh_runner.go:195] Run: systemctl --version
I1216 04:33:55.083065   49326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423958
I1216 04:33:55.104602   49326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-423958/id_rsa Username:docker}
I1216 04:33:55.199627   49326 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-423958 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"8f82e7871731441a3b3a6d7dbb690a71ccd5bc156d413120609a7e5999657cee","repoDigests":["localhost/minikube-local-cache-test@sha256:5050bdaa89693d777a30f29a494657eab5f4d6ec9e5220b25637892b684a3ff5"],"repoTags":["localhost/minikube-local-cache-test:functional-423958"],"size":"3330"},{"id":"a236f
84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","
repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-423958"],"size":"4945146"},{"id":"52546a367cc9e0d
924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/k
ube-scheduler:v1.34.2"],"size":"53848919"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdf
ec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441
c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-423958 image ls --format json --alsologtostderr:
I1216 04:33:54.938530   49225 out.go:360] Setting OutFile to fd 1 ...
I1216 04:33:54.938634   49225 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:54.938645   49225 out.go:374] Setting ErrFile to fd 2...
I1216 04:33:54.938652   49225 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:54.938906   49225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:33:54.939536   49225 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:54.939655   49225 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:54.940159   49225 cli_runner.go:164] Run: docker container inspect functional-423958 --format={{.State.Status}}
I1216 04:33:54.958520   49225 ssh_runner.go:195] Run: systemctl --version
I1216 04:33:54.958579   49225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423958
I1216 04:33:54.978118   49225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-423958/id_rsa Username:docker}
I1216 04:33:55.072264   49225 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-423958 image ls --format yaml --alsologtostderr:
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-423958
size: "4945146"
- id: 8f82e7871731441a3b3a6d7dbb690a71ccd5bc156d413120609a7e5999657cee
repoDigests:
- localhost/minikube-local-cache-test@sha256:5050bdaa89693d777a30f29a494657eab5f4d6ec9e5220b25637892b684a3ff5
repoTags:
- localhost/minikube-local-cache-test:functional-423958
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-423958 image ls --format yaml --alsologtostderr:
I1216 04:33:54.822957   49117 out.go:360] Setting OutFile to fd 1 ...
I1216 04:33:54.823251   49117 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:54.823266   49117 out.go:374] Setting ErrFile to fd 2...
I1216 04:33:54.823272   49117 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:54.823556   49117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:33:54.824173   49117 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:54.824285   49117 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:54.824756   49117 cli_runner.go:164] Run: docker container inspect functional-423958 --format={{.State.Status}}
I1216 04:33:54.845450   49117 ssh_runner.go:195] Run: systemctl --version
I1216 04:33:54.845514   49117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423958
I1216 04:33:54.865746   49117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-423958/id_rsa Username:docker}
I1216 04:33:54.967615   49117 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 ssh pgrep buildkitd: exit status 1 (293.862075ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image build -t localhost/my-image:functional-423958 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-423958 image build -t localhost/my-image:functional-423958 testdata/build --alsologtostderr: (2.349248579s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-423958 image build -t localhost/my-image:functional-423958 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5d493627bb1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-423958
--> ad6d0433a3a
Successfully tagged localhost/my-image:functional-423958
ad6d0433a3a30069c35f0522f7fce33348f4b1bf04ae26e42c3959e798798a8c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-423958 image build -t localhost/my-image:functional-423958 testdata/build --alsologtostderr:
I1216 04:33:55.125052   49343 out.go:360] Setting OutFile to fd 1 ...
I1216 04:33:55.125245   49343 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:55.125257   49343 out.go:374] Setting ErrFile to fd 2...
I1216 04:33:55.125263   49343 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:33:55.125475   49343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:33:55.126066   49343 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:55.126786   49343 config.go:182] Loaded profile config "functional-423958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:33:55.127312   49343 cli_runner.go:164] Run: docker container inspect functional-423958 --format={{.State.Status}}
I1216 04:33:55.144989   49343 ssh_runner.go:195] Run: systemctl --version
I1216 04:33:55.145076   49343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423958
I1216 04:33:55.161967   49343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-423958/id_rsa Username:docker}
I1216 04:33:55.254337   49343 build_images.go:162] Building image from path: /tmp/build.1454500914.tar
I1216 04:33:55.254426   49343 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 04:33:55.262222   49343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1454500914.tar
I1216 04:33:55.265806   49343 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1454500914.tar: stat -c "%s %y" /var/lib/minikube/build/build.1454500914.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1454500914.tar': No such file or directory
I1216 04:33:55.265832   49343 ssh_runner.go:362] scp /tmp/build.1454500914.tar --> /var/lib/minikube/build/build.1454500914.tar (3072 bytes)
I1216 04:33:55.283129   49343 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1454500914
I1216 04:33:55.290356   49343 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1454500914 -xf /var/lib/minikube/build/build.1454500914.tar
I1216 04:33:55.298652   49343 crio.go:315] Building image: /var/lib/minikube/build/build.1454500914
I1216 04:33:55.298726   49343 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-423958 /var/lib/minikube/build/build.1454500914 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 04:33:57.385750   49343 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-423958 /var/lib/minikube/build/build.1454500914 --cgroup-manager=cgroupfs: (2.0869904s)
I1216 04:33:57.385817   49343 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1454500914
I1216 04:33:57.393810   49343 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1454500914.tar
I1216 04:33:57.401461   49343 build_images.go:218] Built localhost/my-image:functional-423958 from /tmp/build.1454500914.tar
I1216 04:33:57.401495   49343 build_images.go:134] succeeded building to: functional-423958
I1216 04:33:57.401503   49343 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)
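
The build trace above outlines the crio build path: the context directory is tarred on the host, staged under /var/lib/minikube/build on the node, unpacked, and built there with podman. A rough manual sketch of the final step (the per-run build.NNNN temp directory name is shown as a placeholder):

    out/minikube-linux-amd64 -p functional-423958 ssh -- sudo podman build -t localhost/my-image:functional-423958 /var/lib/minikube/build/<staged-context-dir> --cgroup-manager=cgroupfs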

TestFunctional/parallel/ImageCommands/Setup (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-423958
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image load --daemon kicbase/echo-server:functional-423958 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image load --daemon kicbase/echo-server:functional-423958 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-423958
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image load --daemon kicbase/echo-server:functional-423958 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image save kicbase/echo-server:functional-423958 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image rm kicbase/echo-server:functional-423958 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-423958 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.420966937s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.33s)
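
Together with ImageSaveToFile and ImageRemove above, this completes a save/remove/load round-trip through a tarball. The same sequence can be replayed by hand (using a scratch path in place of the workspace path from this run):

    out/minikube-linux-amd64 -p functional-423958 image save kicbase/echo-server:functional-423958 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-423958 image rm kicbase/echo-server:functional-423958
    out/minikube-linux-amd64 -p functional-423958 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-423958 image ls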

TestFunctional/parallel/ServiceCmd/List (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 service list -o json
functional_test.go:1504: Took "386.080625ms" to run "out/minikube-linux-amd64 -p functional-423958 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-423958
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 image save --daemon kicbase/echo-server:functional-423958 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-423958 image save --daemon kicbase/echo-server:functional-423958 --alsologtostderr: (3.758529881s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-423958
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.80s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32157
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32157
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
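
The HTTPS, Format, and URL subtests above resolve the same NodePort endpoint in three shapes; for reference, the invocations used are:

    out/minikube-linux-amd64 -p functional-423958 service --namespace=default --https --url hello-node    (https://192.168.49.2:32157)
    out/minikube-linux-amd64 -p functional-423958 service hello-node --url --format={{.IP}}               (IP only)
    out/minikube-linux-amd64 -p functional-423958 service hello-node --url                                (http://192.168.49.2:32157)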

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "382.287469ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "83.27431ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/MountCmd/any-port (6.93s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdany-port42800064/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765859619079639381" to /tmp/TestFunctionalparallelMountCmdany-port42800064/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765859619079639381" to /tmp/TestFunctionalparallelMountCmdany-port42800064/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765859619079639381" to /tmp/TestFunctionalparallelMountCmdany-port42800064/001/test-1765859619079639381
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.487329ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:33:39.370446    8592 retry.go:31] will retry after 681.77491ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 04:33 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 04:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 04:33 test-1765859619079639381
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh cat /mount-9p/test-1765859619079639381
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-423958 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [0945b552-250d-46b3-9b3e-91992c8e7ec7] Pending
helpers_test.go:353: "busybox-mount" [0945b552-250d-46b3-9b3e-91992c8e7ec7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [0945b552-250d-46b3-9b3e-91992c8e7ec7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [0945b552-250d-46b3-9b3e-91992c8e7ec7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003012533s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-423958 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdany-port42800064/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.93s)
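
The mount test above follows a start / verify / tear-down pattern that also works interactively; a minimal sketch, with an illustrative host directory:

    out/minikube-linux-amd64 mount -p functional-423958 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T /mount-9p | grep 9p"    (retried until the 9p mount appears)
    out/minikube-linux-amd64 -p functional-423958 ssh "sudo umount -f /mount-9p"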

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "391.814096ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.232136ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-423958 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-423958 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-423958 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 45174: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-423958 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-423958 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (6.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-423958 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [e933cf4c-0189-4405-8194-aa32d79ef7b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [e933cf4c-0189-4405-8194-aa32d79ef7b8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 6.003465044s
I1216 04:33:46.530294    8592 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (6.20s)

TestFunctional/parallel/MountCmd/specific-port (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdspecific-port3705285738/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.737726ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:33:46.292954    8592 retry.go:31] will retry after 492.993901ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdspecific-port3705285738/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 ssh "sudo umount -f /mount-9p": exit status 1 (281.733728ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-423958 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdspecific-port3705285738/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-423958 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
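
With the tunnel from StartTunnel still running, the LoadBalancer service acquires the ingress IP probed above; the lookup is plain kubectl, and the address can be hit directly while the tunnel is up:

    kubectl --context functional-423958 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
    curl http://10.99.168.168    (the endpoint AccessDirect reports as working)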

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.168.168 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-423958 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1768799997/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1768799997/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1768799997/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T" /mount1: exit status 1 (342.743215ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:33:48.222055    8592 retry.go:31] will retry after 625.106178ms: exit status 1
I1216 04:33:48.296608    8592 detect.go:223] nested VM detected
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-423958 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-423958 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1768799997/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1768799997/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-423958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1768799997/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)
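
For manual reproduction, the cleanup flow this test exercises looks roughly like the following (the host path is illustrative; the profile name and flags are the ones used above):

    # start a background 9p mount, then confirm it is visible inside the guest
    minikube mount /tmp/hostdir:/mount1 -p functional-423958 --alsologtostderr -v=1 &
    minikube -p functional-423958 ssh "findmnt -T /mount1"
    # --kill terminates every mount process belonging to the profile at once
    minikube mount -p functional-423958 --kill=true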

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-423958
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-423958
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-423958
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22141-5066/.minikube/files/etc/test/nested/copy/8592/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (66.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767327 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 04:35:12.563862    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-767327 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m6.507189776s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (66.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (14.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1216 04:35:26.124483    8592 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767327 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-767327 --alsologtostderr -v=8: (14.645957095s)
functional_test.go:678: soft start took 14.646270536s for "functional-767327" cluster.
I1216 04:35:40.770761    8592 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (14.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-767327 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1226372964/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cache add minikube-local-cache-test:functional-767327
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cache delete minikube-local-cache-test:functional-767327
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-767327
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.83s)
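
The add_local variant round-trips a locally built image through minikube's host-side cache; a rough equivalent (build context path assumed):

    docker build -t minikube-local-cache-test:functional-767327 /path/to/context
    minikube -p functional-767327 cache add minikube-local-cache-test:functional-767327
    minikube -p functional-767327 cache delete minikube-local-cache-test:functional-767327
    docker rmi minikube-local-cache-test:functional-767327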

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.438264ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.51s)
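
The non-zero exit in the middle of cache_reload is the point of the test: the image is removed in-node, `crictl inspecti` confirms it is gone, and `cache reload` restores it from the host-side cache. Replayed by hand:

    minikube -p functional-767327 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-767327 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image absent
    minikube -p functional-767327 cache reload
    minikube -p functional-767327 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again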

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 kubectl -- --context functional-767327 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-767327 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (66.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767327 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-767327 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.51835309s)
functional_test.go:776: restart took 1m6.518553784s for "functional-767327" cluster.
I1216 04:36:52.931672    8592 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (66.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-767327 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
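
The health gate is a label query over the control-plane pods; a quick manual spot check (the jsonpath template is illustrative, not the harness's own):

    kubectl --context functional-767327 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'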

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-767327 logs: (1.17202302s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1881616029/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-767327 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1881616029/001/logs.txt: (1.195547816s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.20s)
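
Both logs variants can be run directly; --file writes the same output to a path instead of stdout (the target path here is illustrative):

    minikube -p functional-767327 logs
    minikube -p functional-767327 logs --file /tmp/logs.txt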

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-767327 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-767327
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-767327: exit status 115 (327.997029ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31975 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-767327 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.37s)
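
Exit status 115 is the assertion here: `minikube service` refuses to print a usable URL for a Service that selects no running pods. In outline:

    kubectl --context functional-767327 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-767327   # exit 115: SVC_UNREACHABLE, no backing pod
    kubectl --context functional-767327 delete -f testdata/invalidsvc.yaml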

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 config get cpus: exit status 14 (79.908782ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 config get cpus: exit status 14 (90.321812ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
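
The two exit-14 results above are expected, not failures: `config get` exits 14 for a key that is unset. The full round trip:

    minikube -p functional-767327 config get cpus     # exit 14: key not set
    minikube -p functional-767327 config set cpus 2
    minikube -p functional-767327 config get cpus     # prints 2
    minikube -p functional-767327 config unset cpus
    minikube -p functional-767327 config get cpus     # exit 14 again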

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-767327 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-767327 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 64607: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767327 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-767327 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (173.231404ms)

-- stdout --
	* [functional-767327] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1216 04:37:12.114542   63653 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:37:12.114775   63653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:37:12.114784   63653 out.go:374] Setting ErrFile to fd 2...
	I1216 04:37:12.114788   63653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:37:12.114990   63653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:37:12.115456   63653 out.go:368] Setting JSON to false
	I1216 04:37:12.116448   63653 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1183,"bootTime":1765858649,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:37:12.116502   63653 start.go:143] virtualization: kvm guest
	I1216 04:37:12.118433   63653 out.go:179] * [functional-767327] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:37:12.120117   63653 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:37:12.120140   63653 notify.go:221] Checking for updates...
	I1216 04:37:12.122687   63653 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:37:12.124062   63653 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:37:12.125395   63653 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:37:12.126656   63653 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:37:12.127909   63653 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:37:12.129451   63653 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:37:12.129914   63653 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:37:12.154629   63653 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:37:12.154775   63653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:37:12.218241   63653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-16 04:37:12.207350783 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:37:12.218369   63653 docker.go:319] overlay module found
	I1216 04:37:12.220054   63653 out.go:179] * Using the docker driver based on existing profile
	I1216 04:37:12.224455   63653 start.go:309] selected driver: docker
	I1216 04:37:12.224475   63653 start.go:927] validating driver "docker" against &{Name:functional-767327 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:37:12.224555   63653 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:37:12.226350   63653 out.go:203] 
	W1216 04:37:12.227611   63653 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 04:37:12.228820   63653 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767327 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.40s)
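
--dry-run validates flags against the existing profile without touching the cluster; a request below minikube's 1800MB usable minimum fails with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while a valid invocation passes. Condensed from the run above:

    minikube start -p functional-767327 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
    minikube start -p functional-767327 --dry-run --driver=docker --container-runtime=crio                  # validation passes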

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767327 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-767327 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (183.933102ms)

-- stdout --
	* [functional-767327] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1216 04:37:12.523234   63981 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:37:12.523364   63981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:37:12.523376   63981 out.go:374] Setting ErrFile to fd 2...
	I1216 04:37:12.523382   63981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:37:12.523686   63981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:37:12.524170   63981 out.go:368] Setting JSON to false
	I1216 04:37:12.525137   63981 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1183,"bootTime":1765858649,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:37:12.525198   63981 start.go:143] virtualization: kvm guest
	I1216 04:37:12.527160   63981 out.go:179] * [functional-767327] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1216 04:37:12.528454   63981 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:37:12.528474   63981 notify.go:221] Checking for updates...
	I1216 04:37:12.530685   63981 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:37:12.531773   63981 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 04:37:12.532734   63981 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 04:37:12.537495   63981 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:37:12.538734   63981 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:37:12.540481   63981 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:37:12.541226   63981 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:37:12.569196   63981 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 04:37:12.569309   63981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:37:12.627948   63981 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-16 04:37:12.618009505 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:37:12.628070   63981 docker.go:319] overlay module found
	I1216 04:37:12.629774   63981 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 04:37:12.633467   63981 start.go:309] selected driver: docker
	I1216 04:37:12.633482   63981 start.go:927] validating driver "docker" against &{Name:functional-767327 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:37:12.633571   63981 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:37:12.635258   63981 out.go:203] 
	W1216 04:37:12.636319   63981 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 04:37:12.637414   63981 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-767327 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-767327 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-v7d8b" [3c5dcb61-eafb-4b22-aef0-b9239c06519d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-v7d8b" [3c5dcb61-eafb-4b22-aef0-b9239c06519d] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003516291s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31972
functional_test.go:1680: http://192.168.49.2:31972: success! body:
Request served by hello-node-connect-9f67c86d4-v7d8b

HTTP/1.1 GET /

Host: 192.168.49.2:31972
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.53s)
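
This is the canonical NodePort round trip; condensed (the curl step is an illustrative stand-in for the harness's HTTP check):

    kubectl --context functional-767327 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-767327 expose deployment hello-node-connect --type=NodePort --port=8080
    # once the pod is Ready, this prints e.g. http://192.168.49.2:31972
    minikube -p functional-767327 service hello-node-connect --url
    curl "$(minikube -p functional-767327 service hello-node-connect --url)"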

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (18.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [d93a11d3-8369-4def-b908-0bc0f817d3d3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003644585s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-767327 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-767327 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-767327 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-767327 apply -f testdata/storage-provisioner/pod.yaml
I1216 04:37:06.974977    8592 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [883a9270-b091-4913-aab9-bc0f2b7bf0fc] Pending
helpers_test.go:353: "sp-pod" [883a9270-b091-4913-aab9-bc0f2b7bf0fc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003222808s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-767327 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-767327 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-767327 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [27c45fc3-9169-4655-ac3b-a39d3aeb761a] Pending
helpers_test.go:353: "sp-pod" [27c45fc3-9169-4655-ac3b-a39d3aeb761a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.006543487s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-767327 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (18.43s)
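
The persistence check writes through the claim, deletes the pod, and reads the file back from a fresh pod bound to the same PVC:

    kubectl --context functional-767327 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-767327 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-767327 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-767327 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-767327 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
    kubectl --context functional-767327 exec sp-pod -- ls /tmp/mount                     # foo survives the pod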

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh -n functional-767327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cp functional-767327:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2223165384/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh -n functional-767327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh -n functional-767327 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.90s)
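
`minikube cp` copies both host-to-node and node-to-host; the cases exercised above reduce to the following (the destination path on the host is illustrative):

    minikube -p functional-767327 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
    minikube -p functional-767327 cp functional-767327:/home/docker/cp-test.txt ./cp-test.txt # node -> host
    minikube -p functional-767327 ssh -n functional-767327 "sudo cat /home/docker/cp-test.txt"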

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (22.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-767327 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-tc5ld" [b3094ba0-6861-4c07-97e5-b515986a0c87] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-tc5ld" [b3094ba0-6861-4c07-97e5-b515986a0c87] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 16.004182016s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-767327 exec mysql-7d7b65bc95-tc5ld -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-767327 exec mysql-7d7b65bc95-tc5ld -- mysql -ppassword -e "show databases;": exit status 1 (88.126459ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:37:32.120762    8592 retry.go:31] will retry after 562.898193ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-767327 exec mysql-7d7b65bc95-tc5ld -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-767327 exec mysql-7d7b65bc95-tc5ld -- mysql -ppassword -e "show databases;": exit status 1 (117.792179ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:37:32.802514    8592 retry.go:31] will retry after 1.746371367s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-767327 exec mysql-7d7b65bc95-tc5ld -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-767327 exec mysql-7d7b65bc95-tc5ld -- mysql -ppassword -e "show databases;": exit status 1 (88.474184ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:37:34.638906    8592 retry.go:31] will retry after 3.168950501s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-767327 exec mysql-7d7b65bc95-tc5ld -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (22.06s)

                                                
                                    
x
+
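Note: the "Access denied" and socket errors above are expected while mysqld is still initializing inside the pod; the test simply re-runs the same query with increasing backoff until it succeeds. A minimal manual equivalent is sketched below (the pod name is the one from this run and will differ elsewhere):

	# Poll until mysqld is up and the root password is accepted.
	until kubectl --context functional-767327 exec mysql-7d7b65bc95-tc5ld -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2
	done
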
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8592/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo cat /etc/test/nested/copy/8592/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8592.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo cat /etc/ssl/certs/8592.pem"
I1216 04:37:13.903633    8592 detect.go:223] nested VM detected
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8592.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo cat /usr/share/ca-certificates/8592.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/85922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo cat /etc/ssl/certs/85922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/85922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo cat /usr/share/ca-certificates/85922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.74s)

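Note: the numeric names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash link names for the synced PEM files. As a sketch, and assuming openssl is available where the PEM lives, the expected .0 name can be recomputed inside the node:

	# Prints the subject hash (e.g. 51391683) that names the .0 link.
	minikube -p functional-767327 ssh "sudo openssl x509 -noout -hash -in /etc/ssl/certs/8592.pem"
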
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-767327 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 ssh "sudo systemctl is-active docker": exit status 1 (311.697518ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 ssh "sudo systemctl is-active containerd": exit status 1 (323.392175ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.64s)

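Note: systemctl is-active exits non-zero for any state other than active (here remote status 3 with "inactive" on stdout), so the non-zero exits above are exactly what this test expects on a crio cluster. A manual spot-check in the same spirit, using the profile name from this run:

	# Succeeds only when docker is NOT the active runtime.
	minikube -p functional-767327 ssh "sudo systemctl is-active docker" \
	  || echo "docker inactive, as expected with --container-runtime=crio"
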
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 ssh pgrep buildkitd: exit status 1 (338.384355ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image build -t localhost/my-image:functional-767327 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-767327 image build -t localhost/my-image:functional-767327 testdata/build --alsologtostderr: (5.355076953s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767327 image build -t localhost/my-image:functional-767327 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0cd44c6f6fb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-767327
--> 59784cdbbe7
Successfully tagged localhost/my-image:functional-767327
59784cdbbe7e0c436b248ac2e470aca5bc1347c6b124a23574ccc446269893b5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767327 image build -t localhost/my-image:functional-767327 testdata/build --alsologtostderr:
I1216 04:37:22.724566   67953 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:22.724852   67953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:22.724865   67953 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:22.724873   67953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:22.725149   67953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
I1216 04:37:22.725849   67953 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:22.726602   67953 config.go:182] Loaded profile config "functional-767327": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:37:22.727245   67953 cli_runner.go:164] Run: docker container inspect functional-767327 --format={{.State.Status}}
I1216 04:37:22.747696   67953 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:22.747752   67953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767327
I1216 04:37:22.771239   67953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/functional-767327/id_rsa Username:docker}
I1216 04:37:22.872707   67953 build_images.go:162] Building image from path: /tmp/build.2787914061.tar
I1216 04:37:22.872780   67953 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 04:37:22.883303   67953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2787914061.tar
I1216 04:37:22.887970   67953 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2787914061.tar: stat -c "%s %y" /var/lib/minikube/build/build.2787914061.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2787914061.tar': No such file or directory
I1216 04:37:22.888004   67953 ssh_runner.go:362] scp /tmp/build.2787914061.tar --> /var/lib/minikube/build/build.2787914061.tar (3072 bytes)
I1216 04:37:22.911258   67953 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2787914061
I1216 04:37:22.920397   67953 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2787914061 -xf /var/lib/minikube/build/build.2787914061.tar
I1216 04:37:22.930274   67953 crio.go:315] Building image: /var/lib/minikube/build/build.2787914061
I1216 04:37:22.930383   67953 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-767327 /var/lib/minikube/build/build.2787914061 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 04:37:27.984873   67953 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-767327 /var/lib/minikube/build/build.2787914061 --cgroup-manager=cgroupfs: (5.054454983s)
I1216 04:37:27.984930   67953 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2787914061
I1216 04:37:27.993085   67953 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2787914061.tar
I1216 04:37:28.000223   67953 build_images.go:218] Built localhost/my-image:functional-767327 from /tmp/build.2787914061.tar
I1216 04:37:28.000256   67953 build_images.go:134] succeeded building to: functional-767327
I1216 04:37:28.000262   67953 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls
E1216 04:37:28.699880    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.93s)

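Note: with the crio runtime, image build tars up the build context, copies it into the node, and drives sudo podman build there, which is what the Stderr trace above shows. Roughly the same steps by hand would look like the sketch below; the /tmp and /var/lib/minikube paths here are illustrative, not the ones the test generated:

	tar -C testdata/build -cf /tmp/ctx.tar .
	minikube -p functional-767327 cp /tmp/ctx.tar /var/lib/minikube/ctx.tar
	minikube -p functional-767327 ssh -- "sudo mkdir -p /var/lib/minikube/ctx \
	  && sudo tar -C /var/lib/minikube/ctx -xf /var/lib/minikube/ctx.tar \
	  && sudo podman build -t localhost/my-image:functional-767327 /var/lib/minikube/ctx"
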
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-767327
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image load --daemon kicbase/echo-server:functional-767327 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-767327 image load --daemon kicbase/echo-server:functional-767327 --alsologtostderr: (1.008290026s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-767327 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-767327 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-4c9fz" [54939bc0-72dd-45a9-a8f8-da2354f453e4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-4c9fz" [54939bc0-72dd-45a9-a8f8-da2354f453e4] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.002999704s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-767327 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-767327 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-767327 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-767327 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 60344: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-767327 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-767327 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [0d5d6de5-842f-431e-95af-8806a4be7516] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [0d5d6de5-842f-431e-95af-8806a4be7516] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004077792s
I1216 04:37:10.642977    8592 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image load --daemon kicbase/echo-server:functional-767327 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-767327
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image load --daemon kicbase/echo-server:functional-767327 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image save kicbase/echo-server:functional-767327 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image rm kicbase/echo-server:functional-767327 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-767327
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 image save --daemon kicbase/echo-server:functional-767327 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-767327
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 service list -o json
functional_test.go:1504: Took "487.841254ms" to run "out/minikube-linux-amd64 -p functional-767327 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30626
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30626
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.34s)

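Note: service --url resolves the NodePort endpoint for the deployment created in ServiceCmd/DeployApp; the HTTPS and URL variants above report the same port 30626 because both map to the service's single NodePort. A quick end-to-end check, assuming the hello-node deployment is still running:

	URL=$(minikube -p functional-767327 service hello-node --url)
	curl -s "$URL" | head -n 1
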
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-767327 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.223.252 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

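Note: while minikube tunnel runs, the LoadBalancer service is given a reachable ingress IP (10.106.223.252 above), which is what AccessDirect probes. A manual probe in the same vein is sketched below; it assumes the tunnel from StartTunnel is still up:

	IP=$(kubectl --context functional-767327 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -sI "http://$IP" | head -n 1
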
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-767327 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "364.553665ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.058533ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo154432468/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765859831404656412" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo154432468/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765859831404656412" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo154432468/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765859831404656412" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo154432468/001/test-1765859831404656412
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.03155ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:37:11.698037    8592 retry.go:31] will retry after 456.514923ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 04:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 04:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 04:37 test-1765859831404656412
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh cat /mount-9p/test-1765859831404656412
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-767327 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c3580f3f-4442-4915-8726-7bce9488d157] Pending
helpers_test.go:353: "busybox-mount" [c3580f3f-4442-4915-8726-7bce9488d157] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c3580f3f-4442-4915-8726-7bce9488d157] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [c3580f3f-4442-4915-8726-7bce9488d157] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004433318s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-767327 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo154432468/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.92s)

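Note: the first findmnt probe fails because the 9p mount is established asynchronously after minikube mount starts; the test polls until the mount shows up. A sketch of the same pattern, with a hypothetical host directory standing in for the test's temp dir:

	minikube mount -p functional-767327 /tmp/hostdir:/mount-9p &
	until minikube -p functional-767327 ssh "findmnt -T /mount-9p | grep 9p"; do
	  sleep 1
	done
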
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "325.787245ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.719326ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3431609231/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.63733ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:37:18.608183    8592 retry.go:31] will retry after 304.16651ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T /mount-9p | grep 9p"
2025/12/16 04:37:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3431609231/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 ssh "sudo umount -f /mount-9p": exit status 1 (382.27184ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-767327 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3431609231/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3830521855/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3830521855/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3830521855/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T" /mount1: exit status 1 (439.26525ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:37:20.726980    8592 retry.go:31] will retry after 362.409831ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-767327 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-767327 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3830521855/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3830521855/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767327 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3830521855/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-767327
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-767327
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-767327
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (139.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1216 04:37:56.405788    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:27.389227    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:27.395593    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:27.406946    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:27.428289    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:27.469733    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:27.551134    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:27.712656    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:28.035090    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:28.677097    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:29.958831    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:32.520938    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:37.643037    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:47.884966    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:39:08.366382    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:39:49.328160    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m18.419266591s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (139.12s)
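The StartCluster step above is the complete recipe for bringing up an HA (multi-control-plane) cluster. A minimal sketch of the same invocation with a stock minikube binary on PATH (the CI run uses the freshly built out/minikube-linux-amd64; the profile name ha-176274 is just the one this run generated):

    minikube start -p ha-176274 --ha --memory 3072 --wait true \
      --driver=docker --container-runtime=crio
    minikube -p ha-176274 status --alsologtostderr -v 5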

TestMultiControlPlane/serial/DeployApp (4.46s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 kubectl -- rollout status deployment/busybox: (2.472680402s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-l99nx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-nr5wr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-qw5hp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-l99nx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-nr5wr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-qw5hp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-l99nx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-nr5wr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-qw5hp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.46s)
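DeployApp boils down to applying a busybox Deployment and exercising in-cluster DNS from every replica. A condensed sketch of those checks (pod names vary per run; the loop form is mine, the test runs each pod explicitly):

    kubectl --context ha-176274 apply -f ./testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-176274 rollout status deployment/busybox
    for pod in $(kubectl --context ha-176274 get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-176274 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done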

TestMultiControlPlane/serial/PingHostFromPods (1.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-l99nx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-l99nx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-nr5wr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-nr5wr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-qw5hp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 kubectl -- exec busybox-7b57f96db7-qw5hp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.08s)
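The pipeline quoted above pulls the resolved address of host.minikube.internal out of nslookup's output (fifth line, third space-separated field) and then pings it; 192.168.49.1 is simply the docker network gateway this run resolved. Generalized to the first busybox pod:

    pod=$(kubectl --context ha-176274 get pods -o jsonpath='{.items[0].metadata.name}')
    host_ip=$(kubectl --context ha-176274 exec "$pod" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-176274 exec "$pod" -- sh -c "ping -c 1 $host_ip"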

TestMultiControlPlane/serial/AddWorkerNode (56.43s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 node add --alsologtostderr -v 5: (55.57888244s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.43s)
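A bare node add, as here, joins a worker node; the AddSecondaryNode test later in this report passes --control-plane to join another control plane instead. Both forms as exercised in this run:

    minikube -p ha-176274 node add --alsologtostderr -v 5
    minikube -p ha-176274 node add --control-plane --alsologtostderr -v 5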

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-176274 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)
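All of the HAppy*/Degraded* checks shell out to profile list --output json and assert on the cluster status it reports. A sketch of inspecting that output with jq (the .valid[].Status path is an assumption about the JSON shape, which this report does not print):

    minikube profile list --output json | jq -r '.valid[] | "\(.Name): \(.Status)"'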

TestMultiControlPlane/serial/CopyFile (16.92s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp testdata/cp-test.txt ha-176274:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile487938603/001/cp-test_ha-176274.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274:/home/docker/cp-test.txt ha-176274-m02:/home/docker/cp-test_ha-176274_ha-176274-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m02 "sudo cat /home/docker/cp-test_ha-176274_ha-176274-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274:/home/docker/cp-test.txt ha-176274-m03:/home/docker/cp-test_ha-176274_ha-176274-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m03 "sudo cat /home/docker/cp-test_ha-176274_ha-176274-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274:/home/docker/cp-test.txt ha-176274-m04:/home/docker/cp-test_ha-176274_ha-176274-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m04 "sudo cat /home/docker/cp-test_ha-176274_ha-176274-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp testdata/cp-test.txt ha-176274-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile487938603/001/cp-test_ha-176274-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m02:/home/docker/cp-test.txt ha-176274:/home/docker/cp-test_ha-176274-m02_ha-176274.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274 "sudo cat /home/docker/cp-test_ha-176274-m02_ha-176274.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m02:/home/docker/cp-test.txt ha-176274-m03:/home/docker/cp-test_ha-176274-m02_ha-176274-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m03 "sudo cat /home/docker/cp-test_ha-176274-m02_ha-176274-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m02:/home/docker/cp-test.txt ha-176274-m04:/home/docker/cp-test_ha-176274-m02_ha-176274-m04.txt
E1216 04:41:11.250386    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m04 "sudo cat /home/docker/cp-test_ha-176274-m02_ha-176274-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp testdata/cp-test.txt ha-176274-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile487938603/001/cp-test_ha-176274-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m03:/home/docker/cp-test.txt ha-176274:/home/docker/cp-test_ha-176274-m03_ha-176274.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274 "sudo cat /home/docker/cp-test_ha-176274-m03_ha-176274.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m03:/home/docker/cp-test.txt ha-176274-m02:/home/docker/cp-test_ha-176274-m03_ha-176274-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m02 "sudo cat /home/docker/cp-test_ha-176274-m03_ha-176274-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m03:/home/docker/cp-test.txt ha-176274-m04:/home/docker/cp-test_ha-176274-m03_ha-176274-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m04 "sudo cat /home/docker/cp-test_ha-176274-m03_ha-176274-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp testdata/cp-test.txt ha-176274-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile487938603/001/cp-test_ha-176274-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m04:/home/docker/cp-test.txt ha-176274:/home/docker/cp-test_ha-176274-m04_ha-176274.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274 "sudo cat /home/docker/cp-test_ha-176274-m04_ha-176274.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m04:/home/docker/cp-test.txt ha-176274-m02:/home/docker/cp-test_ha-176274-m04_ha-176274-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m02 "sudo cat /home/docker/cp-test_ha-176274-m04_ha-176274-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 cp ha-176274-m04:/home/docker/cp-test.txt ha-176274-m03:/home/docker/cp-test_ha-176274-m04_ha-176274-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 ssh -n ha-176274-m03 "sudo cat /home/docker/cp-test_ha-176274-m04_ha-176274-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.92s)
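CopyFile pushes testdata/cp-test.txt onto every node and then round-trips it between each ordered pair of nodes with minikube cp, verifying contents over minikube ssh. One leg of that matrix, spelled out:

    minikube -p ha-176274 cp testdata/cp-test.txt ha-176274:/home/docker/cp-test.txt
    minikube -p ha-176274 cp ha-176274:/home/docker/cp-test.txt \
      ha-176274-m02:/home/docker/cp-test_ha-176274_ha-176274-m02.txt
    minikube -p ha-176274 ssh -n ha-176274-m02 \
      "sudo cat /home/docker/cp-test_ha-176274_ha-176274-m02.txt"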

TestMultiControlPlane/serial/StopSecondaryNode (13.78s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 node stop m02 --alsologtostderr -v 5: (13.101800964s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5: exit status 7 (672.851461ms)
-- stdout --
	ha-176274
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-176274-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-176274-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-176274-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1216 04:41:33.185880   88518 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:41:33.186130   88518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:41:33.186139   88518 out.go:374] Setting ErrFile to fd 2...
	I1216 04:41:33.186143   88518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:41:33.186314   88518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:41:33.186472   88518 out.go:368] Setting JSON to false
	I1216 04:41:33.186496   88518 mustload.go:66] Loading cluster: ha-176274
	I1216 04:41:33.186626   88518 notify.go:221] Checking for updates...
	I1216 04:41:33.186829   88518 config.go:182] Loaded profile config "ha-176274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:41:33.186842   88518 status.go:174] checking status of ha-176274 ...
	I1216 04:41:33.187286   88518 cli_runner.go:164] Run: docker container inspect ha-176274 --format={{.State.Status}}
	I1216 04:41:33.206523   88518 status.go:371] ha-176274 host status = "Running" (err=<nil>)
	I1216 04:41:33.206564   88518 host.go:66] Checking if "ha-176274" exists ...
	I1216 04:41:33.206834   88518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-176274
	I1216 04:41:33.225116   88518 host.go:66] Checking if "ha-176274" exists ...
	I1216 04:41:33.225486   88518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:41:33.225544   88518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-176274
	I1216 04:41:33.243565   88518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/ha-176274/id_rsa Username:docker}
	I1216 04:41:33.333328   88518 ssh_runner.go:195] Run: systemctl --version
	I1216 04:41:33.339650   88518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:41:33.351666   88518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:41:33.408954   88518 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-16 04:41:33.397259993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:41:33.409549   88518 kubeconfig.go:125] found "ha-176274" server: "https://192.168.49.254:8443"
	I1216 04:41:33.409579   88518 api_server.go:166] Checking apiserver status ...
	I1216 04:41:33.409625   88518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:41:33.421587   88518 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup
	W1216 04:41:33.430052   88518 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:41:33.430103   88518 ssh_runner.go:195] Run: ls
	I1216 04:41:33.434130   88518 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 04:41:33.440520   88518 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 04:41:33.440556   88518 status.go:463] ha-176274 apiserver status = Running (err=<nil>)
	I1216 04:41:33.440569   88518 status.go:176] ha-176274 status: &{Name:ha-176274 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:41:33.440593   88518 status.go:174] checking status of ha-176274-m02 ...
	I1216 04:41:33.440829   88518 cli_runner.go:164] Run: docker container inspect ha-176274-m02 --format={{.State.Status}}
	I1216 04:41:33.459399   88518 status.go:371] ha-176274-m02 host status = "Stopped" (err=<nil>)
	I1216 04:41:33.459416   88518 status.go:384] host is not running, skipping remaining checks
	I1216 04:41:33.459421   88518 status.go:176] ha-176274-m02 status: &{Name:ha-176274-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:41:33.459436   88518 status.go:174] checking status of ha-176274-m03 ...
	I1216 04:41:33.459670   88518 cli_runner.go:164] Run: docker container inspect ha-176274-m03 --format={{.State.Status}}
	I1216 04:41:33.476718   88518 status.go:371] ha-176274-m03 host status = "Running" (err=<nil>)
	I1216 04:41:33.476745   88518 host.go:66] Checking if "ha-176274-m03" exists ...
	I1216 04:41:33.477096   88518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-176274-m03
	I1216 04:41:33.493439   88518 host.go:66] Checking if "ha-176274-m03" exists ...
	I1216 04:41:33.493693   88518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:41:33.493739   88518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-176274-m03
	I1216 04:41:33.511089   88518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32801 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/ha-176274-m03/id_rsa Username:docker}
	I1216 04:41:33.600288   88518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:41:33.612455   88518 kubeconfig.go:125] found "ha-176274" server: "https://192.168.49.254:8443"
	I1216 04:41:33.612478   88518 api_server.go:166] Checking apiserver status ...
	I1216 04:41:33.612507   88518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:41:33.622782   88518 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W1216 04:41:33.630822   88518 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:41:33.630879   88518 ssh_runner.go:195] Run: ls
	I1216 04:41:33.634667   88518 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 04:41:33.638967   88518 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 04:41:33.638989   88518 status.go:463] ha-176274-m03 apiserver status = Running (err=<nil>)
	I1216 04:41:33.638999   88518 status.go:176] ha-176274-m03 status: &{Name:ha-176274-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:41:33.639031   88518 status.go:174] checking status of ha-176274-m04 ...
	I1216 04:41:33.639328   88518 cli_runner.go:164] Run: docker container inspect ha-176274-m04 --format={{.State.Status}}
	I1216 04:41:33.657667   88518 status.go:371] ha-176274-m04 host status = "Running" (err=<nil>)
	I1216 04:41:33.657693   88518 host.go:66] Checking if "ha-176274-m04" exists ...
	I1216 04:41:33.657955   88518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-176274-m04
	I1216 04:41:33.675653   88518 host.go:66] Checking if "ha-176274-m04" exists ...
	I1216 04:41:33.675953   88518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:41:33.675990   88518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-176274-m04
	I1216 04:41:33.696315   88518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32806 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/ha-176274-m04/id_rsa Username:docker}
	I1216 04:41:33.787189   88518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:41:33.800160   88518 status.go:176] ha-176274-m04 status: &{Name:ha-176274-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.78s)
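Note that status intentionally exits non-zero (exit status 7 here) as soon as any node is down, while still printing per-node state; the PASS above means the test expected exactly that. The distinction matters when scripting:

    minikube -p ha-176274 node stop m02
    if ! minikube -p ha-176274 status; then
      echo "at least one node is not fully running"
    fi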

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 node start m02 --alsologtostderr -v 5: (7.536115124s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.45s)
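Recovering the stopped control plane is symmetric, and the test's final kubectl get nodes confirms m02 rejoined:

    minikube -p ha-176274 node start m02 --alsologtostderr -v 5
    kubectl get nodes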

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 stop --alsologtostderr -v 5
E1216 04:42:00.446137    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:00.452533    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:00.463866    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:00.485311    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:00.527581    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:00.609002    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:00.770584    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:01.092329    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:01.734235    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:03.015826    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:05.578173    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:10.699492    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:20.941107    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 stop --alsologtostderr -v 5: (43.557582285s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 start --wait true --alsologtostderr -v 5
E1216 04:42:28.699247    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:41.423070    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:43:22.385239    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 start --wait true --alsologtostderr -v 5: (58.69597443s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.38s)
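The assertion in RestartClusterKeepsNodes is that a full stop/start cycle leaves the node inventory unchanged; the test captures node list before and after the restart. The same comparison by hand (the diff is mine, the test compares programmatically):

    minikube -p ha-176274 node list > /tmp/nodes-before.txt
    minikube -p ha-176274 stop --alsologtostderr -v 5
    minikube -p ha-176274 start --wait true --alsologtostderr -v 5
    minikube -p ha-176274 node list | diff /tmp/nodes-before.txt -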

TestMultiControlPlane/serial/DeleteSecondaryNode (10.47s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 node delete m03 --alsologtostderr -v 5
E1216 04:43:27.388012    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 node delete m03 --alsologtostderr -v 5: (9.707062178s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.47s)
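After deleting m03, the health of the remaining nodes is asserted with a go-template that prints each node's Ready condition, one status per line, all of which should read True:

    minikube -p ha-176274 node delete m03 --alsologtostderr -v 5
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'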

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (48.49s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 stop --alsologtostderr -v 5
E1216 04:43:55.094177    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 stop --alsologtostderr -v 5: (48.373184392s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5: exit status 7 (113.61941ms)
-- stdout --
	ha-176274
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-176274-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-176274-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1216 04:44:25.745427  102852 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:44:25.745683  102852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:44:25.745692  102852 out.go:374] Setting ErrFile to fd 2...
	I1216 04:44:25.745696  102852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:44:25.745902  102852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:44:25.746102  102852 out.go:368] Setting JSON to false
	I1216 04:44:25.746126  102852 mustload.go:66] Loading cluster: ha-176274
	I1216 04:44:25.746197  102852 notify.go:221] Checking for updates...
	I1216 04:44:25.746520  102852 config.go:182] Loaded profile config "ha-176274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:44:25.746538  102852 status.go:174] checking status of ha-176274 ...
	I1216 04:44:25.747073  102852 cli_runner.go:164] Run: docker container inspect ha-176274 --format={{.State.Status}}
	I1216 04:44:25.765420  102852 status.go:371] ha-176274 host status = "Stopped" (err=<nil>)
	I1216 04:44:25.765470  102852 status.go:384] host is not running, skipping remaining checks
	I1216 04:44:25.765489  102852 status.go:176] ha-176274 status: &{Name:ha-176274 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:44:25.765536  102852 status.go:174] checking status of ha-176274-m02 ...
	I1216 04:44:25.765881  102852 cli_runner.go:164] Run: docker container inspect ha-176274-m02 --format={{.State.Status}}
	I1216 04:44:25.784207  102852 status.go:371] ha-176274-m02 host status = "Stopped" (err=<nil>)
	I1216 04:44:25.784255  102852 status.go:384] host is not running, skipping remaining checks
	I1216 04:44:25.784266  102852 status.go:176] ha-176274-m02 status: &{Name:ha-176274-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:44:25.784290  102852 status.go:174] checking status of ha-176274-m04 ...
	I1216 04:44:25.784604  102852 cli_runner.go:164] Run: docker container inspect ha-176274-m04 --format={{.State.Status}}
	I1216 04:44:25.801388  102852 status.go:371] ha-176274-m04 host status = "Stopped" (err=<nil>)
	I1216 04:44:25.801415  102852 status.go:384] host is not running, skipping remaining checks
	I1216 04:44:25.801424  102852 status.go:176] ha-176274-m04 status: &{Name:ha-176274-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (48.49s)

TestMultiControlPlane/serial/RestartCluster (54.31s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1216 04:44:44.306899    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.536514548s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.31s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (39.67s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-176274 node add --control-plane --alsologtostderr -v 5: (38.824289815s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-176274 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (39.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-484990 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-484990 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.741171869s)
--- PASS: TestJSONOutput/start/Command (39.74s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.18s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-484990 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-484990 --output=json --user=testUser: (6.17875062s)
--- PASS: TestJSONOutput/stop/Command (6.18s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-772711 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-772711 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.327634ms)
-- stdout --
	{"specversion":"1.0","id":"d66e647d-9838-428c-b3d8-6719ea7ce389","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-772711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b231ac2-6c23-4cf3-a3e8-83cefef24c27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22141"}}
	{"specversion":"1.0","id":"4e91696f-cbad-4dfd-87e2-a9e0701b3b96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b26f4df4-4082-43ce-8d67-3d03d5d0bdfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig"}}
	{"specversion":"1.0","id":"71585f31-2319-4ada-b5ff-e644f1a53900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube"}}
	{"specversion":"1.0","id":"73976e5d-dfac-4239-8062-fad019d1c290","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"21fc26fe-f7fd-4a44-949d-336b1a2d4103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"278a0f9c-c8e3-406d-9c10-dfe2fbd4aab5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-772711" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-772711
--- PASS: TestErrorJSONOutput (0.24s)
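Every line minikube emits under --output=json is a CloudEvents-style object whose type field separates steps, info messages, and errors; the DRV_UNSUPPORTED_OS event above is the one this test asserts on. A sketch of filtering the error events out of the stream with jq:

    minikube start -p json-output-error-772711 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'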

TestKicCustomNetwork/create_custom_network (25.33s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-598074 --network=
E1216 04:47:28.149253    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-598074 --network=: (23.176911383s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-598074" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-598074
E1216 04:47:28.699549    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-598074: (2.135648327s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.33s)

TestKicCustomNetwork/use_default_bridge_network (21.47s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-347978 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-347978 --network=bridge: (19.449040275s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-347978" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-347978
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-347978: (2.005821246s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.47s)

TestKicExistingNetwork (25.53s)

=== RUN   TestKicExistingNetwork
I1216 04:47:52.217220    8592 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 04:47:52.233592    8592 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 04:47:52.233670    8592 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 04:47:52.233687    8592 cli_runner.go:164] Run: docker network inspect existing-network
W1216 04:47:52.249093    8592 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 04:47:52.249121    8592 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1216 04:47:52.249150    8592 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1216 04:47:52.249296    8592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 04:47:52.265259    8592 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d72ba881e355 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:5d:91:95:af:e9} reservation:<nil>}
I1216 04:47:52.265656    8592 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e99250}
I1216 04:47:52.265683    8592 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 04:47:52.265726    8592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 04:47:52.311338    8592 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-362472 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-362472 --network=existing-network: (23.389167841s)
helpers_test.go:176: Cleaning up "existing-network-362472" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-362472
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-362472: (2.015998754s)
I1216 04:48:17.733093    8592 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.53s)
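The pre-created network in this test is built with the exact flags logged by network_create.go:124 above (subnet 192.168.58.0/24, gateway 192.168.58.1, MTU 1500, plus minikube's created_by/name labels). A hedged sketch of that setup step, mirroring the logged docker invocation rather than minikube's internal code:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the `docker network create` command logged in this run.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("network create failed: %v\n%s", err, out)
	}
	log.Println("network existing-network created")
}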

TestKicCustomSubnet (26s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-386339 --subnet=192.168.60.0/24
E1216 04:48:27.393210    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-386339 --subnet=192.168.60.0/24: (23.851534292s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-386339 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-386339" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-386339
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-386339: (2.12335055s)
--- PASS: TestKicCustomSubnet (26.00s)
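The subnet assertion at kic_custom_network_test.go:161 reads the first IPAM config entry back out of the created network. The same check standalone, under the assumption the network from this run still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subnetOf runs the same inspect command as the test:
// docker network inspect <name> --format "{{(index .IPAM.Config 0).Subnet}}"
func subnetOf(network string) (string, error) {
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	got, err := subnetOf("custom-subnet-386339")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println(got == "192.168.60.0/24") // expect true for this run
}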

TestKicStaticIP (25.48s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-548827 --static-ip=192.168.200.200
E1216 04:48:51.767853    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-548827 --static-ip=192.168.200.200: (23.198281265s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-548827 ip
helpers_test.go:176: Cleaning up "static-ip-548827" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-548827
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-548827: (2.126674054s)
--- PASS: TestKicStaticIP (25.48s)
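kic_custom_network_test.go:138 compares `minikube ip` against the requested --static-ip. The same check as a standalone sketch; the binary path and profile name are taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the running profile for its IP and compare to the requested one.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "static-ip-548827", "ip").Output()
	if err != nil {
		fmt.Println("minikube ip failed:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println(ip == "192.168.200.200") // expect true for this run
}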

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (47.54s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-339094 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-339094 --driver=docker  --container-runtime=crio: (19.455905608s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-342116 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-342116 --driver=docker  --container-runtime=crio: (22.163578072s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-339094
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-342116
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-342116" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-342116
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-342116: (2.34667838s)
helpers_test.go:176: Cleaning up "first-339094" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-339094
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-339094: (2.370265748s)
--- PASS: TestMinikubeProfile (47.54s)
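The profile assertions above parse `profile list -ojson`. A sketch of that decoding step; the valid/invalid arrays with a Name field are an assumption about the JSON shape and may vary across minikube versions:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList is an assumed shape for `minikube profile list -ojson`.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}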

TestMountStart/serial/StartWithMountFirst (4.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-294092 --memory=3072 --mount-string /tmp/TestMountStartserial2779050357/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-294092 --memory=3072 --mount-string /tmp/TestMountStartserial2779050357/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.702549484s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.70s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-294092 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (4.65s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-304949 --memory=3072 --mount-string /tmp/TestMountStartserial2779050357/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-304949 --memory=3072 --mount-string /tmp/TestMountStartserial2779050357/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.653350241s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.65s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-304949 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-294092 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-294092 --alsologtostderr -v=5: (1.687517487s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-304949 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-304949
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-304949: (1.258871899s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.2s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-304949
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-304949: (6.201986291s)
--- PASS: TestMountStart/serial/RestartStopped (7.20s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-304949 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
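All of the VerifyMount* steps above reduce to one probe: `minikube ssh -- ls <dir>` must exit zero, proving the 9p mount is visible inside the guest. A standalone sketch of that probe (profile and mount point from this run):

package main

import (
	"fmt"
	"os/exec"
)

// verifyMount mirrors mount_start_test.go:134: a zero exit from
// `minikube ssh -- ls <dir>` means the host directory is mounted.
func verifyMount(profile, dir string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "--", "ls", dir)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("mount check failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(verifyMount("mount-start-2-304949", "/minikube-host"))
}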

TestMultiNode/serial/FreshStart2Nodes (63.57s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777550 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-777550 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.094084662s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.57s)

TestMultiNode/serial/DeployApp2Nodes (3.38s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-777550 -- rollout status deployment/busybox: (1.988802871s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-f272r -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-ss2s6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-f272r -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-ss2s6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-f272r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-ss2s6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.38s)
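The deploy test resolves three DNS names from inside each busybox pod. A sketch of the same loop, assuming kubectl's current context already points at the cluster (pod names are from this run and will differ on a fresh cluster):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same sequence as multinode_test.go:536-554: nslookup several targets
	// from inside each busybox pod to confirm cluster DNS on both nodes.
	pods := []string{"busybox-7b57f96db7-f272r", "busybox-7b57f96db7-ss2s6"}
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, t := range targets {
			out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", t).CombinedOutput()
			fmt.Printf("%s -> %s: err=%v\n%s\n", pod, t, err, strings.TrimSpace(string(out)))
		}
	}
}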

TestMultiNode/serial/PingHostFrom2Pods (0.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-f272r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-f272r -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-ss2s6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777550 -- exec busybox-7b57f96db7-ss2s6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)
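The host-IP extraction above is a shell pipeline: nslookup, take line 5, take the third space-delimited field. The same parsing in Go, using Split rather than Fields to match cut's single-space delimiter semantics (the sample output is illustrative; busybox nslookup formats vary):

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`:
// take the fifth line of output and return its third space-delimited field.
func hostIPFromNslookup(out string) (string, error) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("unexpected nslookup output: %q", out)
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected line: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.168.67.1 for this sample
}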

TestMultiNode/serial/AddNode (22.87s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-777550 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-777550 -v=5 --alsologtostderr: (22.251180735s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.87s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-777550 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.74s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp testdata/cp-test.txt multinode-777550:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1793891032/001/cp-test_multinode-777550.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550:/home/docker/cp-test.txt multinode-777550-m02:/home/docker/cp-test_multinode-777550_multinode-777550-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m02 "sudo cat /home/docker/cp-test_multinode-777550_multinode-777550-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550:/home/docker/cp-test.txt multinode-777550-m03:/home/docker/cp-test_multinode-777550_multinode-777550-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m03 "sudo cat /home/docker/cp-test_multinode-777550_multinode-777550-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp testdata/cp-test.txt multinode-777550-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1793891032/001/cp-test_multinode-777550-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550-m02:/home/docker/cp-test.txt multinode-777550:/home/docker/cp-test_multinode-777550-m02_multinode-777550.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550 "sudo cat /home/docker/cp-test_multinode-777550-m02_multinode-777550.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550-m02:/home/docker/cp-test.txt multinode-777550-m03:/home/docker/cp-test_multinode-777550-m02_multinode-777550-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m03 "sudo cat /home/docker/cp-test_multinode-777550-m02_multinode-777550-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp testdata/cp-test.txt multinode-777550-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1793891032/001/cp-test_multinode-777550-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550-m03:/home/docker/cp-test.txt multinode-777550:/home/docker/cp-test_multinode-777550-m03_multinode-777550.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550 "sudo cat /home/docker/cp-test_multinode-777550-m03_multinode-777550.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 cp multinode-777550-m03:/home/docker/cp-test.txt multinode-777550-m02:/home/docker/cp-test_multinode-777550-m03_multinode-777550-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 ssh -n multinode-777550-m02 "sudo cat /home/docker/cp-test_multinode-777550-m03_multinode-777550-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.74s)
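Each cell of the copy matrix above is the same two-step check: `minikube cp` a file onto a node, then ssh in and cat it back. One such step as a standalone sketch (names from this run; error handling illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// copyAndVerify mirrors one helpers_test.go:574/552 pair: copy src to a
// node, then read it back over ssh to confirm the content arrived.
func copyAndVerify(profile, node, src, dst string) error {
	cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", src, node+":"+dst)
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp: %v\n%s", err, out)
	}
	cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat "+dst)
	out, err := cat.CombinedOutput()
	if err != nil {
		return fmt.Errorf("cat: %v\n%s", err, out)
	}
	fmt.Printf("%s:%s contains %q\n", node, dst, out)
	return nil
}

func main() {
	err := copyAndVerify("multinode-777550", "multinode-777550-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Println(err)
}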

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 node stop m03
E1216 04:52:00.445960    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-777550 node stop m03: (1.260825772s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-777550 status: exit status 7 (473.270179ms)
-- stdout --
	multinode-777550
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-777550-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-777550-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-777550 status --alsologtostderr: exit status 7 (476.151046ms)
-- stdout --
	multinode-777550
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-777550-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-777550-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1216 04:52:02.061089  162597 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:52:02.061208  162597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:52:02.061215  162597 out.go:374] Setting ErrFile to fd 2...
	I1216 04:52:02.061222  162597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:52:02.061429  162597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:52:02.061602  162597 out.go:368] Setting JSON to false
	I1216 04:52:02.061625  162597 mustload.go:66] Loading cluster: multinode-777550
	I1216 04:52:02.061730  162597 notify.go:221] Checking for updates...
	I1216 04:52:02.061980  162597 config.go:182] Loaded profile config "multinode-777550": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:52:02.061995  162597 status.go:174] checking status of multinode-777550 ...
	I1216 04:52:02.062417  162597 cli_runner.go:164] Run: docker container inspect multinode-777550 --format={{.State.Status}}
	I1216 04:52:02.081440  162597 status.go:371] multinode-777550 host status = "Running" (err=<nil>)
	I1216 04:52:02.081463  162597 host.go:66] Checking if "multinode-777550" exists ...
	I1216 04:52:02.081713  162597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-777550
	I1216 04:52:02.098830  162597 host.go:66] Checking if "multinode-777550" exists ...
	I1216 04:52:02.099149  162597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:52:02.099223  162597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-777550
	I1216 04:52:02.117098  162597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32911 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/multinode-777550/id_rsa Username:docker}
	I1216 04:52:02.207250  162597 ssh_runner.go:195] Run: systemctl --version
	I1216 04:52:02.213204  162597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:52:02.224802  162597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:52:02.277785  162597 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-16 04:52:02.268432439 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 04:52:02.278374  162597 kubeconfig.go:125] found "multinode-777550" server: "https://192.168.67.2:8443"
	I1216 04:52:02.278406  162597 api_server.go:166] Checking apiserver status ...
	I1216 04:52:02.278439  162597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:52:02.289644  162597 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1255/cgroup
	W1216 04:52:02.297712  162597 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1255/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:52:02.297767  162597 ssh_runner.go:195] Run: ls
	I1216 04:52:02.301156  162597 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1216 04:52:02.305355  162597 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1216 04:52:02.305382  162597 status.go:463] multinode-777550 apiserver status = Running (err=<nil>)
	I1216 04:52:02.305395  162597 status.go:176] multinode-777550 status: &{Name:multinode-777550 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:52:02.305413  162597 status.go:174] checking status of multinode-777550-m02 ...
	I1216 04:52:02.305707  162597 cli_runner.go:164] Run: docker container inspect multinode-777550-m02 --format={{.State.Status}}
	I1216 04:52:02.324031  162597 status.go:371] multinode-777550-m02 host status = "Running" (err=<nil>)
	I1216 04:52:02.324054  162597 host.go:66] Checking if "multinode-777550-m02" exists ...
	I1216 04:52:02.324298  162597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-777550-m02
	I1216 04:52:02.341042  162597 host.go:66] Checking if "multinode-777550-m02" exists ...
	I1216 04:52:02.341320  162597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:52:02.341377  162597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-777550-m02
	I1216 04:52:02.358323  162597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32916 SSHKeyPath:/home/jenkins/minikube-integration/22141-5066/.minikube/machines/multinode-777550-m02/id_rsa Username:docker}
	I1216 04:52:02.446993  162597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:52:02.458762  162597 status.go:176] multinode-777550-m02 status: &{Name:multinode-777550-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:52:02.458792  162597 status.go:174] checking status of multinode-777550-m03 ...
	I1216 04:52:02.459078  162597 cli_runner.go:164] Run: docker container inspect multinode-777550-m03 --format={{.State.Status}}
	I1216 04:52:02.476581  162597 status.go:371] multinode-777550-m03 host status = "Stopped" (err=<nil>)
	I1216 04:52:02.476602  162597 status.go:384] host is not running, skipping remaining checks
	I1216 04:52:02.476609  162597 status.go:176] multinode-777550-m03 status: &{Name:multinode-777550-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
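Note the exit status 7 from `minikube status` when a node is stopped: status reports partial outages through its exit code rather than failing outright, which is why the test treats the non-zero exit as expected. A sketch of reading that code from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// In this run a stopped worker produced exit status 7. Reading the code
	// via *exec.ExitError distinguishes "some node is down" from the command
	// failing to run at all.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-777550", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("status exit code:", ee.ExitCode()) // 7 when a host is Stopped
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}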

TestMultiNode/serial/StartAfterStop (7.12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-777550 node start m03 -v=5 --alsologtostderr: (6.436825669s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.12s)

TestMultiNode/serial/RestartKeepsNodes (77.6s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-777550
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-777550
E1216 04:52:28.699073    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-777550: (31.372196113s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777550 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-777550 --wait=true -v=5 --alsologtostderr: (46.10891702s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-777550
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.60s)

TestMultiNode/serial/DeleteNode (5.19s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 node delete m03
E1216 04:53:27.387585    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-777550 node delete m03: (4.617822599s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

TestMultiNode/serial/StopMultiNode (28.47s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-777550 stop: (28.272386223s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-777550 status: exit status 7 (99.006092ms)
-- stdout --
	multinode-777550
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-777550-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-777550 status --alsologtostderr: exit status 7 (99.592404ms)
-- stdout --
	multinode-777550
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-777550-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1216 04:54:00.825754  172431 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:54:00.826043  172431 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:54:00.826054  172431 out.go:374] Setting ErrFile to fd 2...
	I1216 04:54:00.826060  172431 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:54:00.826285  172431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:54:00.826466  172431 out.go:368] Setting JSON to false
	I1216 04:54:00.826495  172431 mustload.go:66] Loading cluster: multinode-777550
	I1216 04:54:00.826609  172431 notify.go:221] Checking for updates...
	I1216 04:54:00.826897  172431 config.go:182] Loaded profile config "multinode-777550": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:54:00.826914  172431 status.go:174] checking status of multinode-777550 ...
	I1216 04:54:00.827390  172431 cli_runner.go:164] Run: docker container inspect multinode-777550 --format={{.State.Status}}
	I1216 04:54:00.845908  172431 status.go:371] multinode-777550 host status = "Stopped" (err=<nil>)
	I1216 04:54:00.845926  172431 status.go:384] host is not running, skipping remaining checks
	I1216 04:54:00.845932  172431 status.go:176] multinode-777550 status: &{Name:multinode-777550 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:54:00.845973  172431 status.go:174] checking status of multinode-777550-m02 ...
	I1216 04:54:00.846224  172431 cli_runner.go:164] Run: docker container inspect multinode-777550-m02 --format={{.State.Status}}
	I1216 04:54:00.864961  172431 status.go:371] multinode-777550-m02 host status = "Stopped" (err=<nil>)
	I1216 04:54:00.865010  172431 status.go:384] host is not running, skipping remaining checks
	I1216 04:54:00.865029  172431 status.go:176] multinode-777550-m02 status: &{Name:multinode-777550-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.47s)

TestMultiNode/serial/RestartMultiNode (48.99s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777550 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-777550 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.401743376s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777550 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.99s)

TestMultiNode/serial/ValidateNameConflict (24.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-777550
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777550-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-777550-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.409445ms)
-- stdout --
	* [multinode-777550-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-777550-m02' is duplicated with machine name 'multinode-777550-m02' in profile 'multinode-777550'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777550-m03 --driver=docker  --container-runtime=crio
E1216 04:54:50.456175    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-777550-m03 --driver=docker  --container-runtime=crio: (22.068267313s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-777550
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-777550: exit status 80 (290.386744ms)
-- stdout --
	* Adding node m03 to cluster multinode-777550 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-777550-m03 already exists in multinode-777550-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-777550-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-777550-m03: (2.38128374s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.87s)

TestPreload (81.17s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-725894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-725894 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (45.011331547s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-725894 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-725894 image pull gcr.io/k8s-minikube/busybox: (1.697863505s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-725894
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-725894: (6.163399669s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-725894 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-725894 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (25.67913487s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-725894 image list
helpers_test.go:176: Cleaning up "test-preload-725894" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-725894
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-725894: (2.383258267s)
--- PASS: TestPreload (81.17s)
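The preload test's final assertion is that the image pulled before the stop/start cycle is still listed afterwards, proving the restart with --preload=true did not discard it. That check in isolation (profile name and binary path from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors preload_test.go:68: list images in the profile and confirm
	// the previously pulled busybox image survived the restart.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-725894", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	found := strings.Contains(string(out), "gcr.io/k8s-minikube/busybox")
	fmt.Println("busybox survived the restart:", found)
}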

TestScheduledStopUnix (97.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-694840 --memory=3072 --driver=docker  --container-runtime=crio
E1216 04:57:00.445849    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-694840 --memory=3072 --driver=docker  --container-runtime=crio: (22.146682423s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-694840 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1216 04:57:02.321534  189440 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:57:02.321799  189440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:57:02.321809  189440 out.go:374] Setting ErrFile to fd 2...
	I1216 04:57:02.321815  189440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:57:02.322034  189440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:57:02.322289  189440 out.go:368] Setting JSON to false
	I1216 04:57:02.322402  189440 mustload.go:66] Loading cluster: scheduled-stop-694840
	I1216 04:57:02.322737  189440 config.go:182] Loaded profile config "scheduled-stop-694840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:57:02.322821  189440 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/config.json ...
	I1216 04:57:02.323014  189440 mustload.go:66] Loading cluster: scheduled-stop-694840
	I1216 04:57:02.323151  189440 config.go:182] Loaded profile config "scheduled-stop-694840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-694840 -n scheduled-stop-694840
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1216 04:57:02.705152  189594 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:57:02.705252  189594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:57:02.705264  189594 out.go:374] Setting ErrFile to fd 2...
	I1216 04:57:02.705270  189594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:57:02.705493  189594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:57:02.705726  189594 out.go:368] Setting JSON to false
	I1216 04:57:02.705901  189594 daemonize_unix.go:73] killing process 189475 as it is an old scheduled stop
	I1216 04:57:02.706007  189594 mustload.go:66] Loading cluster: scheduled-stop-694840
	I1216 04:57:02.706469  189594 config.go:182] Loaded profile config "scheduled-stop-694840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:57:02.706556  189594 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/config.json ...
	I1216 04:57:02.706769  189594 mustload.go:66] Loading cluster: scheduled-stop-694840
	I1216 04:57:02.706891  189594 config.go:182] Loaded profile config "scheduled-stop-694840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1216 04:57:02.711970    8592 retry.go:31] will retry after 104.352µs: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.713151    8592 retry.go:31] will retry after 180.536µs: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.714262    8592 retry.go:31] will retry after 223.505µs: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.715384    8592 retry.go:31] will retry after 400.704µs: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.716534    8592 retry.go:31] will retry after 530.275µs: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.717643    8592 retry.go:31] will retry after 679.045µs: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.718779    8592 retry.go:31] will retry after 1.001577ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.719904    8592 retry.go:31] will retry after 1.946941ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.722143    8592 retry.go:31] will retry after 3.817091ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.726341    8592 retry.go:31] will retry after 4.210402ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.731531    8592 retry.go:31] will retry after 3.339142ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.735758    8592 retry.go:31] will retry after 8.547217ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.745006    8592 retry.go:31] will retry after 17.52091ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.763274    8592 retry.go:31] will retry after 10.013786ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.773538    8592 retry.go:31] will retry after 16.66113ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
I1216 04:57:02.790802    8592 retry.go:31] will retry after 56.455642ms: open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-694840 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-694840 -n scheduled-stop-694840
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-694840
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-694840 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 04:57:28.587530  190249 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:57:28.587803  190249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:57:28.587813  190249 out.go:374] Setting ErrFile to fd 2...
	I1216 04:57:28.587817  190249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:57:28.588082  190249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 04:57:28.588381  190249 out.go:368] Setting JSON to false
	I1216 04:57:28.588473  190249 mustload.go:66] Loading cluster: scheduled-stop-694840
	I1216 04:57:28.588802  190249 config.go:182] Loaded profile config "scheduled-stop-694840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:57:28.588882  190249 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/scheduled-stop-694840/config.json ...
	I1216 04:57:28.589140  190249 mustload.go:66] Loading cluster: scheduled-stop-694840
	I1216 04:57:28.589251  190249 config.go:182] Loaded profile config "scheduled-stop-694840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1216 04:57:28.699988    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-694840
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-694840: exit status 7 (80.321901ms)

                                                
                                                
-- stdout --
	scheduled-stop-694840
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-694840 -n scheduled-stop-694840
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-694840 -n scheduled-stop-694840: exit status 7 (78.452742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-694840" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-694840
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-694840: (4.226463264s)
--- PASS: TestScheduledStopUnix (97.87s)
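For reference, the schedule/cancel pair this test drives can be replayed with os/exec; the binary path and profile name below are the ones from the log above (a sketch, not the test harness itself):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --schedule arms a delayed stop; --cancel-scheduled disarms it.
	schedule := exec.Command("out/minikube-linux-amd64", "stop", "-p", "scheduled-stop-694840", "--schedule", "15s")
	cancel := exec.Command("out/minikube-linux-amd64", "stop", "-p", "scheduled-stop-694840", "--cancel-scheduled")
	for _, cmd := range []*exec.Cmd{schedule, cancel} {
		out, err := cmd.CombinedOutput()
		fmt.Printf("%v -> %s (err: %v)\n", cmd.Args, out, err)
	}
}
```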

                                                
                                    
TestInsufficientStorage (11.61s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-446584 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1216 04:58:23.513302    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-446584 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.174987319s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a2fd9ba6-2d7b-4c59-8190-a20f16302a33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-446584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c02d4ad-e56d-48ed-8997-bf2d74821744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22141"}}
	{"specversion":"1.0","id":"2e40658d-7b8c-463f-bfcc-41cca669837c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f84f595-49fc-4856-a18a-d61e885f9e3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig"}}
	{"specversion":"1.0","id":"d57b2337-372e-4e06-9665-a4516b9aa2d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube"}}
	{"specversion":"1.0","id":"9caa6dab-dbbd-4e4b-b863-c37cbcdd2d38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c658c944-f9a1-453b-9444-a0e026e0917b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4db5aaba-1f0b-4e51-9f2b-c4e41d4f682d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"93afe562-e41a-4f51-9351-b6c298feb20d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"88399d33-a4b5-452d-8993-ded874eef851","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f5a4e70-84c9-49af-828f-64b82b2e879e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"56404917-a02e-4399-a0a6-38c715da71c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-446584\" primary control-plane node in \"insufficient-storage-446584\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3680d8e6-8e71-457b-a619-633fefbdb023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"880faaf1-303d-4507-8ad3-ce8b7ba7731f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"693868c9-87ed-42ac-9bc4-c037e002f685","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
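Each line of the JSON output above is a CloudEvents envelope (specversion, id, source, type, data). A minimal Go decoder for that stream (type names are illustrative, not minikube's own):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent matches the envelope seen in the stdout above; all data
// fields arrive as strings ("currentstep":"0", "exitcode":"26", ...).
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines in the stream
		}
		// io.k8s.sigs.minikube.error events carry exitcode and advice.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}
```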
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-446584 --output=json --layout=cluster
E1216 04:58:27.387921    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-446584 --output=json --layout=cluster: exit status 7 (283.017609ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-446584","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-446584","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 04:58:27.427390  192796 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-446584" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-446584 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-446584 --output=json --layout=cluster: exit status 7 (278.202927ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-446584","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-446584","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 04:58:27.706615  192907 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-446584" does not appear in /home/jenkins/minikube-integration/22141-5066/kubeconfig
	E1216 04:58:27.716806  192907 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/insufficient-storage-446584/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-446584" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-446584
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-446584: (1.87757894s)
--- PASS: TestInsufficientStorage (11.61s)
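The --layout=cluster payloads above are plain JSON; a reader-side Go sketch for decoding them (struct shapes inferred from the output above, not minikube's own type definitions):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name         string               `json:"Name"`
	StatusCode   int                  `json:"StatusCode"`
	StatusName   string               `json:"StatusName"`
	StatusDetail string               `json:"StatusDetail"`
	BinaryVersion string              `json:"BinaryVersion"`
	Components   map[string]component `json:"Components"`
	Nodes        []node               `json:"Nodes"`
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// 507 mirrors HTTP's Insufficient Storage code, as seen in the log.
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
}
```

Piping the first status output above into this program would print: insufficient-storage-446584: InsufficientStorage (507).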

                                                
                                    
TestRunningBinaryUpgrade (45.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.828595142 start -p running-upgrade-466558 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.828595142 start -p running-upgrade-466558 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.274045703s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-466558 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-466558 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.68120767s)
helpers_test.go:176: Cleaning up "running-upgrade-466558" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-466558
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-466558: (2.643787063s)
--- PASS: TestRunningBinaryUpgrade (45.32s)

                                                
                                    
TestKubernetesUpgrade (293.21s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.76129108s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-132399
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-132399: (2.218283019s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-132399 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-132399 status --format={{.Host}}: exit status 7 (76.521264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.264723336s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-132399 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (81.756271ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-132399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-132399
	    minikube start -p kubernetes-upgrade-132399 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1323992 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-132399 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-132399 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.34217693s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-132399" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-132399
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-132399: (2.407872098s)
--- PASS: TestKubernetesUpgrade (293.21s)
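The refused downgrade above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) boils down to a semantic-version comparison. A hedged sketch of such a guard using golang.org/x/mod/semver (not minikube's actual implementation):

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade refuses to move an existing cluster to an older
// Kubernetes version, as this test expects. Name is illustrative.
func checkDowngrade(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.35.0-beta.0", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}
```

With the versions from the log, checkDowngrade returns an error, which the CLI maps to exit code 106.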

                                                
                                    
TestMissingContainerUpgrade (69.48s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2835732198 start -p missing-upgrade-808405 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2835732198 start -p missing-upgrade-808405 --memory=3072 --driver=docker  --container-runtime=crio: (19.943896852s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-808405
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-808405: (10.46949295s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-808405
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-808405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-808405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.748587892s)
helpers_test.go:176: Cleaning up "missing-upgrade-808405" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-808405
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-808405: (3.662895502s)
--- PASS: TestMissingContainerUpgrade (69.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                    
TestPause/serial/Start (88.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-484421 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-484421 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m28.876863397s)
--- PASS: TestPause/serial/Start (88.88s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (303.32s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2258646602 start -p stopped-upgrade-572334 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2258646602 start -p stopped-upgrade-572334 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.019524069s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2258646602 -p stopped-upgrade-572334 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2258646602 -p stopped-upgrade-572334 stop: (1.948419258s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-572334 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-572334 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m21.332825711s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (303.32s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.83s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-484421 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-484421 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.81034052s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.83s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679207 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-679207 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (75.520713ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-679207] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
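The MK_USAGE failure above (exit status 14) comes from a mutually-exclusive flag check. A stdlib-flag sketch of that validation (minikube itself is cobra-based; this is illustrative only):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	// The conflict check behind exit status 14 (MK_USAGE) in the log.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
}
```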

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (21.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679207 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1216 05:02:00.446278    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-679207 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.702569782s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-679207 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (21.03s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.92s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679207 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-679207 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.624993015s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-679207 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-679207 status -o json: exit status 2 (291.154048ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-679207","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-679207
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-679207: (2.002100957s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.92s)

                                                
                                    
TestNoKubernetes/serial/Start (6.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679207 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1216 05:02:28.699201    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-679207 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.746981647s)
--- PASS: TestNoKubernetes/serial/Start (6.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22141-5066/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
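The check logged above asserts that a --no-kubernetes start leaves no Kubernetes binaries in the version cache. A sketch of that directory check (the path is the one from the log; the success criterion is an assumption):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path from the log above; with --no-kubernetes the pseudo
	// version v0.0.0 directory should hold no kubelet/kubeadm downloads.
	dir := "/home/jenkins/minikube-integration/22141-5066/.minikube/cache/linux/amd64/v0.0.0"
	entries, err := os.ReadDir(dir)
	if err != nil && !os.IsNotExist(err) {
		panic(err)
	}
	if len(entries) == 0 {
		fmt.Println("no Kubernetes downloads cached, as expected")
		return
	}
	for _, e := range entries {
		fmt.Println("unexpected cached file:", e.Name())
	}
}
```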

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-679207 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-679207 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.268376ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
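The passing check above relies on `systemctl is-active` exiting non-zero when the unit is not running (exit status 3 means inactive, which is the "ssh: Process exited with status 3" in the stderr). A Go sketch that surfaces that exit code:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 3 is expected here: the kubelet unit is inactive.
		fmt.Println("kubelet unit inactive, exit status", ee.ExitCode())
	} else if err == nil {
		fmt.Println("unexpected: kubelet is active")
	}
}
```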

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.69s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-679207
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-679207: (1.279375552s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.36s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-679207 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-679207 --driver=docker  --container-runtime=crio: (6.357253992s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-679207 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-679207 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.433633ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestNetworkPlugins/group/false (3.43s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-548714 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-548714 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (156.793618ms)

                                                
                                                
-- stdout --
	* [false-548714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:02:47.242398  254091 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:02:47.242528  254091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:47.242538  254091 out.go:374] Setting ErrFile to fd 2...
	I1216 05:02:47.242542  254091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:47.242773  254091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5066/.minikube/bin
	I1216 05:02:47.243334  254091 out.go:368] Setting JSON to false
	I1216 05:02:47.244499  254091 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2718,"bootTime":1765858649,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:02:47.244573  254091 start.go:143] virtualization: kvm guest
	I1216 05:02:47.246514  254091 out.go:179] * [false-548714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:02:47.247960  254091 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:02:47.247987  254091 notify.go:221] Checking for updates...
	I1216 05:02:47.250952  254091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:02:47.252152  254091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5066/kubeconfig
	I1216 05:02:47.253346  254091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5066/.minikube
	I1216 05:02:47.254427  254091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:02:47.255574  254091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:02:47.256968  254091 config.go:182] Loaded profile config "cert-expiration-123545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:02:47.257088  254091 config.go:182] Loaded profile config "kubernetes-upgrade-132399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:47.257166  254091 config.go:182] Loaded profile config "stopped-upgrade-572334": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 05:02:47.257257  254091 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:02:47.280035  254091 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1216 05:02:47.280142  254091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:47.332531  254091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-16 05:02:47.323385771 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 05:02:47.332632  254091 docker.go:319] overlay module found
	I1216 05:02:47.334370  254091 out.go:179] * Using the docker driver based on user configuration
	I1216 05:02:47.335643  254091 start.go:309] selected driver: docker
	I1216 05:02:47.335655  254091 start.go:927] validating driver "docker" against <nil>
	I1216 05:02:47.335665  254091 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:02:47.337595  254091 out.go:203] 
	W1216 05:02:47.338912  254091 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1216 05:02:47.340175  254091 out.go:203] 

                                                
                                                
** /stderr **
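The MK_USAGE exit above reflects a start-time validation: crio cannot be started with --cni=false. A hedged sketch of such a check (names are illustrative, not minikube's code):

```go
package main

import "fmt"

// validateCNI is illustrative, not minikube's implementation: the crio
// runtime needs a CNI, so --cni=false is rejected before any node starts.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime == "crio" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}
```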
net_test.go:88: 
----------------------- debugLogs start: false-548714 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-548714

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-548714" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:00:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-123545
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 04:59:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-132399
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 04:59:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-572334
contexts:
- context:
    cluster: cert-expiration-123545
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:00:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-123545
  name: cert-expiration-123545
- context:
    cluster: kubernetes-upgrade-132399
    user: kubernetes-upgrade-132399
  name: kubernetes-upgrade-132399
- context:
    cluster: stopped-upgrade-572334
    user: stopped-upgrade-572334
  name: stopped-upgrade-572334
current-context: ""
kind: Config
users:
- name: cert-expiration-123545
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/cert-expiration-123545/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/cert-expiration-123545/client.key
- name: kubernetes-upgrade-132399
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kubernetes-upgrade-132399/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kubernetes-upgrade-132399/client.key
- name: stopped-upgrade-572334
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/stopped-upgrade-572334/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/stopped-upgrade-572334/client.key
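The kubeconfig above can also be loaded programmatically with client-go's clientcmd; a short sketch using the KUBECONFIG path from this report:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path from this report's environment.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22141-5066/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext) // "" in the dump above
	for name, cluster := range cfg.Clusters {
		fmt.Printf("%s -> %s\n", name, cluster.Server)
	}
}
```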

                                                
                                                

                                                
                                                

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-548714

>>> host: docker daemon status:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: docker daemon config:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: /etc/docker/daemon.json:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: docker system info:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: cri-docker daemon status:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: cri-docker daemon config:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: cri-dockerd version:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: containerd daemon status:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: containerd daemon config:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: /etc/containerd/config.toml:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: containerd config dump:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: crio daemon status:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: crio daemon config:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: /etc/crio:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

>>> host: crio config:
* Profile "false-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548714"

----------------------- debugLogs end: false-548714 [took: 3.099488559s] --------------------------------
helpers_test.go:176: Cleaning up "false-548714" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-548714
--- PASS: TestNetworkPlugins/group/false (3.43s)
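
The "context was not found for specified context: false-548714" error in the debug dump above is consistent with the kubeconfig it prints: only three contexts exist and current-context is "", because the false-548714 profile had already been torn down by the time debugLogs ran. A minimal sketch, assuming k8s.io/client-go, of performing that context lookup programmatically (kubeconfigPath is a placeholder, not a path taken from this report):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfigPath := os.Getenv("KUBECONFIG") // placeholder location
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The dump lists cert-expiration-123545, kubernetes-upgrade-132399 and
	// stopped-upgrade-572334; false-548714 is absent, hence the error above.
	if _, ok := cfg.Contexts["false-548714"]; !ok {
		fmt.Println(`context "false-548714" not found`)
	}
}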

TestStartStop/group/old-k8s-version/serial/FirstStart (50.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1216 05:03:27.387563    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-423958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.392534837s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.39s)
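
FirstStart is driven entirely by shelling out to the freshly built binary with the flags recorded above. A trimmed sketch of that invocation pattern (not the suite's actual helper), assuming it runs from the repo root where out/minikube-linux-amd64 exists:

package integration

import (
	"os/exec"
	"testing"
)

// startOldK8sVersion mirrors the `minikube start` line logged above and
// fails the test on any non-zero exit, echoing the captured output.
func startOldK8sVersion(t *testing.T) {
	t.Helper()
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "old-k8s-version-545075", "--memory=3072",
		"--alsologtostderr", "--wait=true", "--driver=docker",
		"--container-runtime=crio", "--kubernetes-version=v1.28.0")
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("minikube start: %v\n%s", err, out)
	}
}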

TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-572334
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

TestStartStop/group/no-preload/serial/FirstStart (45.19s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.187882229s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (45.19s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-545075 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [be3c107e-accd-498b-b511-153da24708a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [be3c107e-accd-498b-b511-153da24708a6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003149469s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-545075 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.27s)
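
The helpers_test.go:353 lines above are the visible half of a polling loop: the test lists pods labeled integration-test=busybox in the default namespace and logs each state change until the pod is healthy. A simplified sketch of such a wait, assuming client-go (it checks only the Running phase, while the real helper also requires readiness):

package integration

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitBusybox polls until a pod labeled integration-test=busybox is Running,
// bounded by the 8m0s budget quoted in the log.
func waitBusybox(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=busybox",
			})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}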

TestStartStop/group/embed-certs/serial/FirstStart (40.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (40.016775122s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.02s)

TestStartStop/group/old-k8s-version/serial/Stop (16.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-545075 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-545075 --alsologtostderr -v=3: (16.124495621s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-545075 -n old-k8s-version-545075
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-545075 -n old-k8s-version-545075: exit status 7 (86.403157ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-545075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
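
The "exit status 7 (may be ok)" note reflects how `minikube status` reports state: a stopped cluster yields a non-zero exit code while stdout still carries the host state, so the test inspects the output instead of failing on the error. A standard-library sketch of tolerating that case:

package integration

import (
	"errors"
	"os/exec"
	"strings"
)

// hostStatus runs the same probe as the test; for a stopped cluster the
// command exits non-zero (7 in the log above) but stdout still reads "Stopped".
func hostStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Expected while stopped; report the state and swallow the error.
		return strings.TrimSpace(string(out)), nil
	}
	return strings.TrimSpace(string(out)), err
}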

TestStartStop/group/old-k8s-version/serial/SecondStart (51.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-545075 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.161759855s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-545075 -n old-k8s-version-545075
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.51s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (39.722922466s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.72s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-990631 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bb35de7b-e677-4d9d-9a4e-fb0c49e18b0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [bb35de7b-e677-4d9d-9a4e-fb0c49e18b0a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003644802s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-990631 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-127599 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [253698b2-29a2-40df-963b-00ac87e3af9f] Pending
helpers_test.go:353: "busybox" [253698b2-29a2-40df-963b-00ac87e3af9f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [253698b2-29a2-40df-963b-00ac87e3af9f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004797275s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-127599 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/no-preload/serial/Stop (18.77s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-990631 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-990631 --alsologtostderr -v=3: (18.771531448s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.77s)

TestStartStop/group/embed-certs/serial/Stop (18.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-127599 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-127599 --alsologtostderr -v=3: (18.149426935s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-990631 -n no-preload-990631
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-990631 -n no-preload-990631: exit status 7 (77.919018ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-990631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (46.31s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-990631 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.89108456s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-990631 -n no-preload-990631
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.31s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-375252 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c17e2533-1c59-43f6-b58d-8825c7f2d554] Pending
helpers_test.go:353: "busybox" [c17e2533-1c59-43f6-b58d-8825c7f2d554] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c17e2533-1c59-43f6-b58d-8825c7f2d554] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00399469s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-375252 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-127599 -n embed-certs-127599
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-127599 -n embed-certs-127599: exit status 7 (106.094661ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-127599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (45.43s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-127599 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (45.054058293s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-127599 -n embed-certs-127599
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.43s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pv585" [3222436e-fc8d-4cce-8235-3c3b4766d3db] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003493335s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (20.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-375252 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-375252 --alsologtostderr -v=3: (20.314634282s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (20.31s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pv585" [3222436e-fc8d-4cce-8235-3c3b4766d3db] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005628474s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-545075 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-545075 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
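
VerifyKubernetesImages works from `image list --format=json` and flags anything outside the expected image set, which is how the kindnetd and busybox entries above get reported. A sketch of consuming that JSON; the repoTags field name is an assumption modeled on crictl-style output, so verify it against your minikube version's actual schema:

package integration

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage models only the field this sketch needs; the real schema may
// carry more (digests, sizes) and may differ between minikube versions.
type listedImage struct {
	RepoTags []string `json:"repoTags"`
}

func printImages(profile string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "list", "--format=json").Output()
	if err != nil {
		return err
	}
	var imgs []listedImage
	if err := json.Unmarshal(out, &imgs); err != nil {
		return err
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
	return nil
}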

TestStartStop/group/newest-cni/serial/FirstStart (25.19s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (25.18857293s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.19s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252: exit status 7 (97.389685ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-375252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1216 05:05:31.769269    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-375252 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (49.263895949s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375252 -n default-k8s-diff-port-375252
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.64s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rvxh6" [5acd379e-b1f0-4393-8650-36caf406f779] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003984453s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-pdn5g" [f2b5fe05-a437-44eb-9d53-7f3e646753de] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003989955s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rvxh6" [5acd379e-b1f0-4393-8650-36caf406f779] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004226165s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-990631 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-990631 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-pdn5g" [f2b5fe05-a437-44eb-9d53-7f3e646753de] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003523143s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-127599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/newest-cni/serial/Stop (2.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-418580 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-418580 --alsologtostderr -v=3: (2.454632479s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.45s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418580 -n newest-cni-418580
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418580 -n newest-cni-418580: exit status 7 (89.113022ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-418580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (12.42s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-418580 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (12.051604811s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418580 -n newest-cni-418580
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.42s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-127599 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestNetworkPlugins/group/auto/Start (39.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.974973043s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.98s)

TestNetworkPlugins/group/kindnet/Start (43.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.241746315s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.24s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-418580 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestNetworkPlugins/group/calico/Start (52.41s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.412356535s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.41s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-bbfmr" [ce088b60-6c16-42c0-8eb6-514021eb3e34] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005040584s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-bbfmr" [ce088b60-6c16-42c0-8eb6-514021eb3e34] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004151813s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-375252 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-375252 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-548714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-548714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-55k5k" [fb706da5-0234-4629-aa29-207badda82c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-55k5k" [fb706da5-0234-4629-aa29-207badda82c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00433633s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

TestNetworkPlugins/group/custom-flannel/Start (53.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.805855899s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.81s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-548714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-qnndz" [1b6efc5f-9609-4331-8307-b6c529676897] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006669992s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
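
Localhost and HairPin are near-identical probes run inside the netcat deployment: the first dials localhost, the second dials the pod's own Service name ("netcat"), so HairPin passes only when the CNI loops service traffic back to the originating pod (hairpin NAT). A sketch replaying the HairPin probe exactly as logged:

package integration

import "os/exec"

// hairPin execs into the netcat deployment and dials its own Service;
// a non-nil error means hairpin traffic is not being looped back.
func hairPin(kubeContext string) error {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").Run()
}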

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-548714 "pgrep -a kubelet"
I1216 05:06:55.064553    8592 config.go:182] Loaded profile config "kindnet-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-548714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-t7b9j" [95ad2430-eefc-48fb-a6f4-66ffe958a120] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-t7b9j" [95ad2430-eefc-48fb-a6f4-66ffe958a120] Running
E1216 05:07:00.445987    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/functional-767327/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003981315s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-548714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-gsk6w" [65842a62-7fd6-4c67-b89b-1d11d41f1223] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-gsk6w" [65842a62-7fd6-4c67-b89b-1d11d41f1223] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003904741s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (62.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m2.570758092s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.57s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-548714 "pgrep -a kubelet"
I1216 05:07:15.466595    8592 config.go:182] Loaded profile config "calico-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-548714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-kpnn6" [8fc0be51-9650-4565-8d3d-3d82019b6c55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-kpnn6" [8fc0be51-9650-4565-8d3d-3d82019b6c55] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004152526s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)
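
The NetCatPod step force-replaces a small deployment from testdata/netcat-deployment.yaml and waits for its pod to go Ready. That file is not reproduced in the log; a hypothetical minimal equivalent, inferred only from the pod names and the "dnsutils" container seen in the readiness output (the image and command here are illustrative choices, not the harness's actual manifest):

  kubectl --context calico-548714 apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: netcat
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: netcat              # label the test waits on
    template:
      metadata:
        labels:
          app: netcat
      spec:
        containers:
        - name: dnsutils         # container name seen in the readiness output
          image: registry.k8s.io/e2e-test-images/agnhost:2.40   # hypothetical image choice
          command: ["/agnhost", "netexec", "--http-port=8080"]  # listens on 8080 for the nc probes
          ports:
          - containerPort: 8080
  EOF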

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-548714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (45.18s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1216 05:07:28.699396    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/addons-920118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (45.184578967s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-548714 "pgrep -a kubelet"
I1216 05:07:33.465603    8592 config.go:182] Loaded profile config "custom-flannel-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-548714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6cl5n" [084081d8-5fe2-44f0-be51-900cd3b2923a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6cl5n" [084081d8-5fe2-44f0-be51-900cd3b2923a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004280558s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-548714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (62.42s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-548714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.417690503s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.42s)
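
The Start step for each variant differs only in how the CNI is selected (--cni=bridge, --cni=flannel, --enable-default-cni, or a custom manifest). One way to confirm which CNI configuration actually landed on the node, a sketch assuming the standard CNI config directory inside the minikube node:

  out/minikube-linux-amd64 ssh -p bridge-548714 "ls /etc/cni/net.d"
  out/minikube-linux-amd64 ssh -p bridge-548714 "sudo cat /etc/cni/net.d/*"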

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-qtwgf" [d727be04-f2f6-400c-93b0-07f98faa08ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003246858s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-548714 "pgrep -a kubelet"
I1216 05:08:12.735262    8592 config.go:182] Loaded profile config "enable-default-cni-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-548714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6q957" [50063500-271b-4956-aa2b-c49c08a769a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6q957" [50063500-271b-4956-aa2b-c49c08a769a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003826642s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-548714 "pgrep -a kubelet"
I1216 05:08:17.286299    8592 config.go:182] Loaded profile config "flannel-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.17s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-548714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-t7kln" [7eba9f6a-ec16-4aab-9555-928165293ded] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-t7kln" [7eba9f6a-ec16-4aab-9555-928165293ded] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003895177s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-548714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-548714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.08s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.08s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-548714 "pgrep -a kubelet"
I1216 05:08:48.920411    8592 config.go:182] Loaded profile config "bridge-548714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.16s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-548714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-4nr7h" [ad9e68d6-ab01-4938-8fc5-6ddeecc9d376] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 05:08:49.928831    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-4nr7h" [ad9e68d6-ab01-4938-8fc5-6ddeecc9d376] Running
E1216 05:08:55.050593    8592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/old-k8s-version-545075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003351742s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-548714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.08s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.08s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-548714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                    

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
159 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
160 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
161 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
372 TestStartStop/group/disable-driver-mounts 0.17
387 TestNetworkPlugins/group/kubenet 3.29
395 TestNetworkPlugins/group/cilium 3.63
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-148966" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-148966
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.29s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-548714 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-548714

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-548714

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /etc/hosts:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /etc/resolv.conf:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-548714

>>> host: crictl pods:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: crictl containers:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> k8s: describe netcat deployment:
error: context "kubenet-548714" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-548714" does not exist

>>> k8s: netcat logs:
error: context "kubenet-548714" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-548714" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-548714" does not exist

>>> k8s: coredns logs:
error: context "kubenet-548714" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-548714" does not exist

>>> k8s: api server logs:
error: context "kubenet-548714" does not exist

>>> host: /etc/cni:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: ip a s:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: ip r s:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: iptables-save:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: iptables table nat:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-548714" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-548714" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-548714" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: kubelet daemon config:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> k8s: kubelet logs:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:00:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-123545
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 04:59:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-132399
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 04:59:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-572334
contexts:
- context:
    cluster: cert-expiration-123545
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:00:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-123545
  name: cert-expiration-123545
- context:
    cluster: kubernetes-upgrade-132399
    user: kubernetes-upgrade-132399
  name: kubernetes-upgrade-132399
- context:
    cluster: stopped-upgrade-572334
    user: stopped-upgrade-572334
  name: stopped-upgrade-572334
current-context: ""
kind: Config
users:
- name: cert-expiration-123545
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/cert-expiration-123545/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/cert-expiration-123545/client.key
- name: kubernetes-upgrade-132399
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kubernetes-upgrade-132399/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kubernetes-upgrade-132399/client.key
- name: stopped-upgrade-572334
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/stopped-upgrade-572334/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/stopped-upgrade-572334/client.key
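
Note `current-context: ""` and the absence of any kubenet-548714 entry in the kubeconfig above: the kubenet profile was never started (the test was skipped), which is exactly why every kubectl probe in this dump fails with "context was not found". With a kubeconfig in this state, commands succeed only when a context is named explicitly or selected first, e.g.:

  kubectl config use-context cert-expiration-123545
  kubectl --context cert-expiration-123545 get nodes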

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-548714

>>> host: docker daemon status:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: docker daemon config:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: docker system info:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: cri-docker daemon status:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: cri-docker daemon config:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: cri-dockerd version:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: containerd daemon status:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: containerd daemon config:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: containerd config dump:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: crio daemon status:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: crio daemon config:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: /etc/crio:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

>>> host: crio config:
* Profile "kubenet-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548714"

----------------------- debugLogs end: kubenet-548714 [took: 3.125882153s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-548714" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-548714
--- SKIP: TestNetworkPlugins/group/kubenet (3.29s)
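For reference, the cleanup step logged just above shells out to the minikube binary. A minimal sketch in Go of that invocation, assuming the relative binary path shown in the log; this is illustrative, not the actual helpers_test.go code:

// cleanup.go: delete a leftover minikube profile the way the log above shows.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken verbatim from the log entry.
	cmd := exec.Command("out/minikube-linux-amd64", "delete", "-p", "kubenet-548714")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("delete failed:", err)
	}
}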
x
+
TestNetworkPlugins/group/cilium (3.63s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-548714 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-548714

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-548714

>>> host: /etc/nsswitch.conf:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /etc/hosts:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /etc/resolv.conf:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-548714

>>> host: crictl pods:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: crictl containers:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> k8s: describe netcat deployment:
error: context "cilium-548714" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-548714" does not exist

>>> k8s: netcat logs:
error: context "cilium-548714" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-548714" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-548714" does not exist

>>> k8s: coredns logs:
error: context "cilium-548714" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-548714" does not exist

>>> k8s: api server logs:
error: context "cilium-548714" does not exist

>>> host: /etc/cni:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: ip a s:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: ip r s:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: iptables-save:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: iptables table nat:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-548714

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-548714

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-548714" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-548714" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-548714

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-548714

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-548714" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-548714" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-548714" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-548714" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-548714" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: kubelet daemon config:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> k8s: kubelet logs:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:00:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-123545
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 04:59:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-132399
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 04:59:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-572334
contexts:
- context:
    cluster: cert-expiration-123545
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:00:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-123545
  name: cert-expiration-123545
- context:
    cluster: kubernetes-upgrade-132399
    user: kubernetes-upgrade-132399
  name: kubernetes-upgrade-132399
- context:
    cluster: stopped-upgrade-572334
    user: stopped-upgrade-572334
  name: stopped-upgrade-572334
current-context: ""
kind: Config
users:
- name: cert-expiration-123545
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/cert-expiration-123545/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/cert-expiration-123545/client.key
- name: kubernetes-upgrade-132399
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kubernetes-upgrade-132399/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/kubernetes-upgrade-132399/client.key
- name: stopped-upgrade-572334
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/stopped-upgrade-572334/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5066/.minikube/profiles/stopped-upgrade-572334/client.key
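The config above explains the repeated failures in this section: current-context is empty and no cilium-548714 entry exists, so every kubectl invocation against that context fails with "context was not found". A minimal sketch (not part of the test harness) of checking the same thing programmatically with client-go; the kubeconfig path is a hypothetical placeholder:

// contextcheck.go: load a kubeconfig and report whether a given context exists.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; substitute the kubeconfig used by the CI run.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("HOME") + "/.kube/config")
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	if _, ok := cfg.Contexts["cilium-548714"]; !ok {
		// Matches the kubectl error seen throughout this section.
		fmt.Println(`context "cilium-548714" does not exist`)
	}
}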
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-548714

>>> host: docker daemon status:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: docker daemon config:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: docker system info:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: cri-docker daemon status:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: cri-docker daemon config:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: cri-dockerd version:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: containerd daemon status:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: containerd daemon config:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: containerd config dump:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: crio daemon status:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: crio daemon config:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: /etc/crio:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

>>> host: crio config:
* Profile "cilium-548714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548714"

----------------------- debugLogs end: cilium-548714 [took: 3.461096467s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-548714" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-548714
--- SKIP: TestNetworkPlugins/group/cilium (3.63s)
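Both plugin groups above end in SKIP before any cluster is created, which is why every probe in their debug logs fails with profile- or context-not-found errors. A minimal sketch of the skip pattern implied by the net_test.go:102 message; the guard below is illustrative, not the actual minikube source:

// skip_sketch.go: how a Go test opts out early, as net_test.go does for cilium.
package nettest

import (
	"strings"
	"testing"
)

// maybeSkipOutdatedPlugin is a hypothetical helper name; the skip message
// itself is the one logged verbatim at net_test.go:102 above.
func maybeSkipOutdatedPlugin(t *testing.T, name string) {
	if strings.Contains(name, "cilium") {
		t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	}
}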